Deploy Dell EMC VxFlex OS v3.x
Version 3.x
Rev 03
November 2019
Copyright © 2019 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America: 1-866-464-7381)
www.DellEMC.com
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features.
Contact your Dell EMC technical support professional if a product does not function properly or
does not function as described in this document.
Note: This document was accurate at publication time. Go to Dell EMC Online Support
(https://support.emc.com) to ensure that you are using the latest version of this document.
Previous versions of Dell EMC VxFlex OS were marketed under the name Dell EMC ScaleIO.
Similarly, previous versions of Dell EMC VxFlex Ready Node were marketed under the name Dell
EMC ScaleIO Ready Node.
References to the old names in the product, in the documentation, and in the software will be phased out over time.
Note: Software and technical aspects apply equally, regardless of the branding of the product.
Related documentation
The release notes for your version include the latest information for your product.
The following Dell EMC publication sets provide information about your VxFlex OS or VxFlex Ready
Node product:
• VxFlex OS software (downloadable as VxFlex OS Software <version> Documentation set)
• VxFlex Ready Node with AMS (downloadable as VxFlex Ready Node with AMS Documentation set)
• VxFlex Ready Node no AMS (downloadable as VxFlex Ready Node no AMS Documentation set)
• VxRack Node 100 Series (downloadable as VxRack Node 100 Series Documentation set)
You can download the release notes, the document sets, and other related documentation from
Dell EMC Online Support.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Bold: Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Technical support
Go to Dell EMC Online Support and click Service Center. You will see several options for
contacting Dell EMC Technical Support. Note that to open a service request, you must have a
valid support agreement. Contact your Dell EMC sales representative for details about
obtaining a valid support agreement or with questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of
the user publications. Send your opinions of this document to [email protected].
Note:
The VxFlex OS Installer also has a REST API that provides install, extend, and uninstall
functionality. Deployment procedures are included in this guide. All the REST API commands
are described in the VxFlex OS REST API Reference Guide.
• Deployment workflow—Linux
• Deployment options—Linux
• Deployment checklist—Linux
Deployment workflow—Linux
The VxFlex OS deployment workflow on physical servers, and in non-VMware hyper-converged
infrastructure (HCI) environments such as XenServer and KVM, is described in this summary.
For VxFlex Ready Node or VxRack Node 100 Series systems, refer to your product's Hardware
Configuration and Operating System Installation Guide and Deployment Guide for step-by-step
installation instructions on how to prepare the server and operating system prior to deployment of
VxFlex OS.
The main steps in the VxFlex OS deployment workflow are:
1. Prepare the environment.
2. Start the deployment.
3. Monitor the deployment.
4. For some 2-layer system configurations, install SDCs on external servers.
5. Run the VxFlex OS system analysis tool.
6. Install the VxFlex OS GUI.
7. Install the license.
8. Perform mandatory, recommended, and optional post-deployment tasks.
9. Move the system to production.
Deployment options—Linux
Deploy VxFlex OS on physical servers using the VxFlex OS Installer wizard, or using the fully customized installation option.
Deployment checklist—Linux
Checklist for deployment requirements in Linux-based environments.
The following checklist lists the items required for VxFlex OS deployment.

General
• All servers intended for hosting VxFlex OS components and the VxFlex OS Gateway meet the system requirements described in the Getting to Know VxFlex OS Guide.
Note: The Dell EMC-supplied hardware configurations satisfy all of the hardware requirements.
• Configure disks so they are visible to the operating system, according to operating system and vendor hardware guidelines.
• To use secure authentication mode, ensure that OpenSSL 64-bit v1.0.2 or later is installed on all servers in the system.
• On nodes containing NVDIMMs and running RHEL 7.6 or higher, ensure that the following package is installed:
kmod-redhat-nfit-3.10.0_957-1.el7_6.x86_64.rpm
The package can be downloaded from the Red Hat Customer Portal.

Gateway
• The server that will host the VxFlex OS Gateway has adequate memory to run the VxFlex OS Installer (at least 3 GB) and any other applications.
• The VxFlex OS Gateway must not be installed on a server that will host either the SDC component, or an SDS component that will use the RFcache acceleration feature.

ESXi
• For 2-layer system SDCs running on VMware, the host from which you run the PowerShell (.ps1) script meets the following prerequisites:
◦ Runs a supported Windows operating system. For more information, refer to the VMware compatibility matrix: https://www.vmware.com/support/developer/PowerCLI/doc/powercli65r1-compat-matrix.html
◦ PowerCLI is installed. For vCenter v6.7, PowerCLI v6.5 must be used.
◦ Java is installed.
◦ Has incoming and outgoing communication access to the vCenter.

Networking
• For 2-layer system SDCs running on VMware, the vSphere web client (Virgo) server has network connectivity to the host on which the PowerShell script will be used.
• The server that will host the VxFlex OS Gateway has connectivity to both the data and management VxFlex OS networks.
• For data networks using IPv6, if you plan to implement a floating virtual IP address for the MDM, disable the IPv6 DAD setting on the server's relevant interfaces, using the command:
sysctl net.ipv6.conf.<interface_name>.dad_transmits=0
Prepare the software packages, CSV file (when required), VxFlex OS Gateway, and other items for
VxFlex OS deployment.
Procedure
1. From the Online Support site (https://support.emc.com/products/33925), download the
complete software for this version.
2. Extract the packages from the downloaded file.
3. Save the extracted files in a location with network connectivity with the server on which
VxFlex OS Gateway will be installed.
The VxFlex OS Installer on the gateway will deploy the packages on all the servers in the
system.
Use the packages and the installation commands that match your Linux operating system
environment.
On XenServer servers, the syntax for VxFlex OS CLI commands is siocli, instead of scli.
Note: Before deploying or upgrading CoreOS, hLinux, Oracle Linux (OL), or Ubuntu systems,
ensure that the VxFlex OS environment is prepared, as described in the preparation tasks for
CoreOS, hLinux, Oracle Linux and Ubuntu in the Deploy VxFlex OS Guide.
The following example output shows the locator and serial number of each NVDIMM:
Locator: A7
Serial Number: 17496594
Locator: B7
Serial Number: 174965AC
3. Find the serial number in the output and record it in the NVDIMM information table.
4. Display the correlation between the ID and NMEM device name of each NVDIMM mounted
on the server:
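For example, using the ndctl utility (an assumption; the JSON fields below match its output format):

# List NVDIMMs (nmem devices) together with their health information
ndctl list -D -H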
{
"dev": "nmem1",
"id": "802c-0f-1722-174965ac",
"handle": 4097,
"phys_id": 4370,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
{
"dev": "nmem0",
"id": "802c-0f-1722-17496594",
"handle": 1,
"phys_id": 4358,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
5. In the output from the previous step, find the device (dev) with the id that partially
correlates with the serial number you discovered previously for the failed device.
For example:
l The NVDIMM output displays serial number 16492521 for the NVDIMM device.
l In the previous step, the output displays the ID of device nmem0 as
802c-0f-1746-802c-0f-1711-16492521.
6. Record the NMEM name in the Device name row of the NVDIMM information table.
7. Correlate between the NMEM DIMM and the namespace/DAX device:
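For example, with ndctl, listing DIMMs and namespaces together (an assumed form):

# List NVDIMMs and namespaces, including health information
ndctl list -D -N -H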
{
"dev": "nmem1",
"id": "802c-0f-1722-174965ac",
"handle": 4097,
"phys_id": 4370,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
{
"dev": "nmem0",
"id": "802c-0f-1722-17496594",
"handle": 1,
"phys_id": 4358,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
{
"dev": "namespace1.0",
"mode": "raw",
"size": 17179869184,
"sector_size": 512,
"blockdev": "pmem1",
"numa_node": 1
}
{
"dev": "namespace0.0",
"mode": "raw",
"size": 17179869184,
"sector_size": 512,
"blockdev": "pmem0",
"numa_node": 0
}
8. In the output displayed in the previous step, locate the namespace that correlates with the
NMEM name and DIMM serial number, and record it in the NVDIMM information table.
In the above example, nmem0's namespace is namespace0.0.
9. Destroy the default namespace that was created for the replacement NVDIMM, using the
namespace discovered in the previous step:
For example, if the replacement NVDIMM maps to namespace0.0, the command is:
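A representative ndctl form (an assumption; the -f flag forces the namespace offline before destroying it):

ndctl destroy-namespace -f namespace0.0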
10. Create a new, raw nmem device using the region associated with the namespace of the failed device, as recorded in the NVDIMM information table:
For example, if the NVDIMM you replaced mapped to region 0, the command is:
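A representative ndctl form (an assumption):

ndctl create-namespace --mode raw --region region0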
11. Convert the namespace device to the acceleration device name of type /dev/daxX.X, using one of the following commands:
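Representative ndctl forms (assumptions; namespace0.0 is the example namespace from the previous steps):

# Reconfigure the namespace into device-DAX mode
ndctl create-namespace --force --reconfig namespace0.0 --mode devdax

or

# Equivalent form using the legacy mode name
ndctl create-namespace --force --reconfig namespace0.0 --mode dax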
12. Record the acceleration device name in the NVDIMM information table.
13. Run the namespace-to-dax-device correlation command to find the DAX device name
of the replacement NVDIMM:
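A representative ndctl form (an assumption; -X includes the device-DAX details shown below):

ndctl list -N -X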
{
"dev": "namespace1.0",
"mode": "devdax",
"map": "dev",
"size": 16909336576,
"uuid": "c59d6a2d-7eeb-4f32-b27a-9960a327e734",
"daxregion": {
"id": 1,
"size": 16909336576,
"align": 4096,
"devices": [
{
"chardev": "dax1.0",
"size": 16909336576
}
]
},
"numa_node": 1
}
{
"dev": "namespace0.0",
"mode": "devdax",
"map": "dev",
"size": 16909336576,
"uuid": "eff6429c-706f-469e-bab4-a0d34321c186",
"daxregion": {
"id": 0,
"size": 16909336576,
"align": 4096,
"devices": [
{
"chardev": "dax0.0",
"size": 16909336576
}
]
},
"numa_node": 0
}
The DAX device name appears in the output as the chardev value.
14. Record the DAX device name in the NVDIMM information table.
15. Find the full acceleration device path:
/dev/daxX.X
For example:
/dev/dax0.0
16. Record the acceleration device path in the NVDIMM information table.
Results
You are now ready to add the DAX device to the NVDIMM Acceleration Pool.
Note: When deploying the VxFlex OS Gateway, if one of the following ports is not free, the
deployment will fail: 80, 8080, 443, 8443. If this occurs, free the above-mentioned ports and
then repeat the deployment procedure.
Note: To install with a customized Java configuration, see "Advanced and optional VxFlex OS
Gateway tasks".
Procedure
1. From the extracted download file, install the VxFlex OS Gateway on the Linux server, by running the installation command for your distribution (all on one line):
• RHEL/CentOS/Oracle Linux/SLES
• Ubuntu
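Representative command forms (an assumption: the GATEWAY_ADMIN_PASSWORD environment variable and the package file names follow this release's pattern; substitute the actual file names from your extracted download):

# RHEL/CentOS/Oracle Linux/SLES
GATEWAY_ADMIN_PASSWORD=<new_GW_admin_password> rpm -U EMC-ScaleIO-gateway-3.0-X.<build>.x86_64.rpm

# Ubuntu
GATEWAY_ADMIN_PASSWORD=<new_GW_admin_password> dpkg -i emc-scaleio-gateway_3.0-X.<build>_amd64.deb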
• Do not install the VxFlex OS Gateway on a server that will host an SDS component on which RFcache acceleration will be enabled (in other words, do not install the VxFlex OS Gateway on SDS nodes on which the RFcache xcache package will be installed).
• Do not install the VxFlex OS Gateway on a server that will be used for a VxFlex OS SDC component.
• You can use a Windows-based VxFlex OS Installer to deploy VxFlex OS on both Windows and Linux-based servers.
Note: When deploying the VxFlex OS Gateway, if one of the following ports is not free, the
deployment will fail: 80, 8080, 443, 8443. If this occurs, free the above-mentioned ports and
then repeat the deployment procedure.
Procedure
1. From the extracted download file, copy the VxFlex OS Gateway MSI file to the server where
it will be installed:
EMC-ScaleIO-gateway-3.0-X.<build>-x64.msi
2. Run the file, and enter a new VxFlex OS Gateway admin password that will be used to
access the VxFlex OS Installer.
The password must meet the following criteria:
• At least 8 characters long
• Include at least 3 of the following groups: [a-z], [A-Z], [0-9], special chars (!@#$ …)
3. From the extracted download file, run the Clean_XC_registry.bat script on every
Windows machine to be part of the system.
The script is included in the Complete Windows download package.
4. Proceed with the remaining preparation tasks, as relevant for your system and installation
method. For installation on Ubuntu, Oracle Linux, or CoreOS servers, pay special attention
to the preparation tasks for these operating systems.
Note: The VxFlex OS Gateway and the VxFlex OS Installer can now be configured or
disabled, as explained in "Configure VxFlex OS Gateway properties".
Results
Configuration is complete.
Procedure
1. Use a REST API command, as described in the VxFlex OS REST API Reference Guide.
2. Edit the user properties file.
3. Restart the VxFlex OS Gateway service:
• Windows: From the Windows Services window, restart the EMC ScaleIO Gateway.
• Linux: Type the following command:
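A representative form, assuming the default service name for this release (an assumption):

service scaleio-gateway restart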
Replace the default self-signed security certificate with your own trusted
certificate
Create your own trusted certificate, and then replace the default certificate with the one that you
created.
Procedure
1. Find the location of keytool on your server, and open it.
It is a part of the Java (JRE or JDK) installation on your server, in the bin directory. For
example:
• C:\Program Files\Java\jdk1.8.0_25\bin\keytool.exe
• /usr/bin/keytool
2. Generate your RSA private key:
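A representative keytool invocation (the alias and keystore path are illustrative placeholders):

keytool -genkeypair -alias <YOUR_ALIAS> -keyalg RSA -keysize 2048 -keystore <keystore_path>/.keystore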
a. If you want to define a password, add the following parameters to the command. Use the
same password for both parameters.
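Assuming the standard keytool password flags are the ones intended:

-storepass <password> -keypass <password>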
Note: Specify a directory outside the VxFlex OS Gateway installation directory for
the newly created keystore file. This will prevent it from being overwritten when the
VxFlex OS Gateway is upgraded or reinstalled.
3. If you already have a Certificate Signing Request (CSR), skip this step.
If you need a CSR, generate one by typing the following command. (If you did not define a
keystore password in the previous step, omit the password flags.)
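A representative keytool CSR command (placeholders are illustrative):

keytool -certreq -alias <YOUR_ALIAS> -file <request_file>.csr -keystore <keystore_path>/.keystore -storepass <password> -keypass <password>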
5. Import the root certificate. If a message appears saying that the root certificate is already in the system-wide store, import it anyway.
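A representative import command (an assumption):

keytool -importcert -trustcacerts -alias <ROOT_ALIAS> -file <root_certificate_file> -keystore <keystore_path>/.keystore -storepass <password>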
6. Import the intermediate certificates, by typing the command. (If you did not define a
keystore password, omit the password flags.)
You must provide a unique alias name for every intermediate certificate that you upload with
this step.
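A representative intermediate-import command (placeholders are illustrative):

keytool -importcert -trustcacerts -alias <INTERMEDIATE_ALIAS> -file <intermediate_certificate_file> -keystore <keystore_path>/.keystore -storepass <password>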
7. Install the SSL Certificate under the same alias that the CSR was created from
(<YOUR_ALIAS> in previous steps), by typing the command (if you did not define a
keystore password, omit the password flags):
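A representative command for this step (placeholders are illustrative):

keytool -importcert -trustcacerts -alias <YOUR_ALIAS> -file <signed_certificate_file> -keystore <keystore_path>/.keystore -storepass <password>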
Replace the default self-signed security certificate with your own self-signed
certificate
Replace the default self-signed security certificate with your own self-signed security certificate.
About this task
Procedure
1. Find the location of keytool on your server, and open it.
It is usually a part of the Java (JRE or JDK) installation on your server, in the bin directory.
For example:
• C:\Program Files\Java\jdk1.7.0_25\bin\keytool.exe
• /usr/bin/keytool
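A representative keytool command that generates a key pair with a self-signed certificate (alias, validity, and keystore path are illustrative placeholders):

keytool -genkeypair -alias <YOUR_ALIAS> -keyalg RSA -keysize 2048 -validity <days> -keystore <keystore_path>/.keystore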
a. If you want to define a password, add the following parameters to the command. Use the
same password for both parameters.
Note: Specify a directory outside the VxFlex OS Gateway installation directory for
the newly created keystore file. This will prevent it from being overwritten when the
VxFlex OS Gateway is upgraded or reinstalled.
Results
Replacement of the security certificate is complete.
1. Before the upgrade, copy the following files to a location outside the VxFlex OS Gateway installation directory:
◦ C:\Program Files\EMC\ScaleIO\Gateway\conf\catalina.properties
◦ C:\Program Files\EMC\ScaleIO\Gateway\conf\certificates\.keystore
2. After the upgrade is complete, copy these files back to their original location.
To enable certificate verification, add the following parameters to the file /etc/cinder/cinder_scaleio.config on the Cinder node:
verify_server_certificate=true
server_certificate_path=<PATH_TO_PEM_FILE>
FOSGWTool is used to upload the private key to the VxFlex OS Gateway. FOSGWTool is located
in:
• Linux: /opt/emc/scaleio/gateway/bin/FOSGWTool.sh
• Windows: C:\Program Files\EMC\ScaleIO\Gateway\bin\FOSGWTool.bat
Note: In a new installation, if SSH key authentication will be used, there is no need to add node
user name and password credentials to the CSV file used for system deployment.
Procedure
1. Create the private and public key pair on the node where the VxFlex OS Gateway is
installed:
a. On the command line, run the following commands:
cd ~/.ssh/
ssh-keygen -t rsa
This command generates a private and public key. When performing this command, you
can add a passphrase, or generate the key without a passphrase. Two SSH keys are
produced: id_rsa and id_rsa.pub.
2. On one of the VxFlex OS servers, run this command to store the public key in the
authorized_keys file:
The --key_passphrase parameter is optional; add it if you defined a passphrase when you created the key pair. During deployment, the gateway will use the private key for authentication.
uname -r
3.16.0-62-generic
b. Copy the following text into a browser, and compare the output to the Ubuntu kernel
version in the Dell EMC repository:
ftp://QNzgdxXix:[email protected]
2. To use a mirror repository, you must ensure that the SSH public and SSH private keys are located on all system nodes in the same path.
The private key should have directory-level permission only (chmod 700 <private_key_path>). The public key can have all permissions. This is not necessary when using the EMC repository.
3. The GPG key must be located on all system nodes in the same path. This key is required for using a mirror or Dell EMC repository.
Procedure
1. Use a text editor to open the gatewayUser.properties file, located in the following
directory on the VxFlex OS Gateway server:
/opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
2. Edit the file by adding the parameters described below.
3. Save and close the file.
4. Import the GPG key by running the following command, on every Ubuntu node that will
contain SDC or RFcache:
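A plausible form, assuming the standard gpg tool is used on the Ubuntu nodes:

gpg --import <path_to_key>/RPM-GPG-KEY-ScaleIO_<version>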
After the changes are made, use the VxFlex OS Installer to deploy the system, using the
installation files for Ubuntu.
sdc.kernel.repo.SCINI_REPO_ADDRESS=ftp://QNzgdxXix:[email protected]
• sdc.kernel.repo.SCINI_REPO_USER
Contains the user name that is used as the login name. Needed when the protocol is FTP or SFTP.
Example:
sdc.kernel.repo.SCINI_REPO_USER=admin
• sdc.kernel.repo.SCINI_REPO_PASSWORD
Represents the password. Needed only for the FTP protocol.
Example:
sdc.kernel.repo.SCINI_REPO_PASSWORD=password
• sdc.kernel.repo.SCINI_REPO_USER_KEY=<path_to_private_key>
Contains a path to the private RSA key used for SSH connections (user identity file). Needed only for SFTP.
Example:
sdc.kernel.repo.SCINI_REPO_USER_KEY=<path_to_private_key>
• sdc.kernel.repo.SCINI_REPO_HOST_KEY=<path_to_public_key>
Contains a path to the repository server's public host key, used for SSH connections. Needed only for SFTP.
Example:
sdc.kernel.repo.SCINI_REPO_HOST_KEY=<path_to_public_key>
• sdc.kernel.repo.SCINI_REPO_MODULE_SIGCHECK
Set to 0 or 1, to determine whether the fetched kernel modules must be verified.
Example:
sdc.kernel.repo.SCINI_REPO_MODULE_SIGCHECK=1
• sdc.kernel.repo.SCINI_REPO_EMC_KEY
Contains a path to the EMC GPG public key file that is needed for signature verification. You can retrieve the GPG key (RPM-GPG-KEY-ScaleIO_<version>.XXX.X.0) from the folder in the root of the ISO: ScaleIO_<version>.X.X_GPG-RPM-KEY_Download.zip
This is the same file used by LIA to verify RPMs. It needs to be set only if SCINI_REPO_MODULE_SIGCHECK was set to 1.
Example:
sdc.kernel.repo.SCINI_REPO_EMC_KEY=<path_to_key>/RPM-GPG-KEY-ScaleIO_3.0.168.0
You can now deploy the system. During the deployment, a driver_sync.conf file is generated for the SDC and for RFcache in their respective folders:
• SDC (Ubuntu/OL): /etc/emc/scaleio/scini_sync/driver_sync.conf
• RFcache (Ubuntu/OL): /etc/emc/scaleio/xcache_sync/driver_sync.conf
The configuration of the driver_sync script is read from driver_sync.conf (an example appears below), which contains the following parameters:

repo_user: Identifies the user name used when logging in to the drivers server. Required for the SFTP and FTP methods.
repo_password: The FTP server login password (for user repo_user). Required for the FTP method.
user_private_rsa_key: The path of the private RSA key file used for the SFTP connection. This key file (provided by VxFlex OS on installation) enables an SFTP connection without a password.
local_dir: The local directory in which the downloaded SDC/RF drivers are stored. For SDC/RF clients, this value is set up during installation and doesn't normally require a change.
repo_public_rsa_key: The path of the public RSA key of the repository server machine (also known as the host public key).
module_sigcheck: Determines whether to verify (1) or not to verify (0) the downloaded kernel drivers.
The following parameters are used when driver_sync.sh is used to synchronize a local
repository against a remote one.
Example:
###############################################
#driver_sync Configuration file
#Everything after a '#' until the end of the line is ignored
###############################################
#Repository address, prefixed by protocol
repo_address = sftp://localhost/path/to/repo/dir
#repo_address = ftp://localhost/path/to/repo/dir
#repo_address = file://local/path/to/repo/dir
# Repository user (valid for ftp/sftp protocol)
repo_user = scini
# Repository password (valid for ftp protocol)
repo_password = scini
# Local directory for modules
local_dir = /bin/emc/scaleio/scini_cache/
# User's RSA private key file (sftp protocol)
user_private_rsa_key = /bin/emc/scaleio/scini_key
# Repository host public key (sftp protocol)
repo_public_rsa_key = /bin/emc/scaleio/scini_repo_key.pub
# Should the fetched modules' signatures be checked [0, 1]
module_sigcheck = 1
# EMC public signature key (needed when module_sigcheck is 1)
emc_public_gpg_key = /bin/emc/scaleio/emc_key.pub
# Sync pattern (regular expression) for massive retrieve
sync_pattern = .*
CSV file templates are provided as part of the software download, in the
VxFlex OS Gateway software packages. They are also accessible from the VxFlex OS Installer user
interface.
Before you begin
Ensure that the settings you enter comply with VxFlex OS product limits, as described in the
"Product limits" table in the Getting to Know VxFlex OS guide.
Ensure that you have a copy of the following CSV templates, which are provided in the VxFlex OS
Gateway software packages. Decide which template is most suitable for your needs. In this
document, we will refer to and illustrate the CSV file as a spreadsheet.
• Complete—this spreadsheet template contains all available fields, both required and optional. In particular, it contains an optional section for Storage Pool configuration, which is relevant for multiple nodes.
• Minimal—this spreadsheet template contains only the required fields. The optional fields (those that are in the complete spreadsheet, but not in the minimal one) will be assigned default values.
About this task
Note: During CSV file preparation, note that there are important differences between hyper-converged and 2-layer topologies.
• In hyper-converged topologies, each row in the upper section represents a server that can host MDM, SDS, and SDC components.
• In 2-layer topologies where frontend servers are Linux or Windows-based, SDCs must be represented by separate, additional rows. The values for SDC-specific rows are described in "CSV topology for 2-layer deployment". Alternatively, omit the SDCs from the CSV file, and install them manually after deploying the backend.
• In 2-layer topologies where frontend servers are ESXi-based, omit them from the CSV file. In this case, the SDCs are installed after deploying the backend, either manually or using the vSphere VxFlex OS plug-in.
Note: Save a copy of the CSV file for future use. It is recommended to save the file in a secure
encrypted folder on your operating system, after deployment. When you add nodes to the
system in the future, you will need the CSV file that you used for initial deployment of the
system.
You can edit a CSV file with Microsoft Excel or file-editing software.
Procedure
1. In the CSV file, fill in your site-specific information in the appropriate places, overwriting the
default information provided in the file.
You only need to use one spreadsheet for the installation, as follows:
• To manually enter all configuration details, use the Complete spreadsheet.
• To use default values for the non-mandatory fields, use the Minimal spreadsheet.
• To configure non-default values for columns that are not in the Minimal spreadsheet, either use the Complete spreadsheet, or copy the column heading from there into the Minimal spreadsheet, and enter your custom values into the Minimal spreadsheet.
The following figure shows a sample of a Complete CSV file used to deploy Linux nodes in a
hyper-converged system:
[Figure: sample Complete CSV file, showing the Servers section and the Storage Pools section]
The following figure shows part of a sample of a Complete CSV file, where SDCs are defined
below the rows for MDMs and SDSs. In 2-layer configuration, the Linux servers will form the
MDM cluster, and each of those servers will also be SDSs. SDCs can be added here, in
separate rows, or added to the system later.
[Figure: sample Complete CSV file for a 2-layer system, with SDC rows below the MDM/SDS rows]
2. For Linux nodes, the Domain column is not relevant. Leave the column blank or remove the
column.
The following table describes the fields in the spreadsheets. The required fields appear in
both spreadsheets. Field names are case-sensitive; the order of the columns is not
significant.
Domain: If using a domain user, the name of the domain (not relevant for Linux)
Password (required): Password used to log in to the node. This should be the password of the user entered in the Username column. To authenticate with SSH instead of node passwords, see "Using SSH authentication on the VxFlex OS Gateway".
IPs: IP address to be used for multiple purposes. Use this field to designate one IP address that will be assigned to all of the following: MDM IP, MDM Mgmt IP, and SDS All IP. This option is provided for use cases where separate networks for data and management are not required.
Is MDM/TB (required): The MDM role to deploy on this node: Master, Slave, TB, Standby-Slave, Standby-TB, or blank (if not an MDM). For more information, see "The MDM cluster" in the Architecture section of Getting to Know VxFlex OS.
Virtual IPs: A virtual IP address (VIP) for each possible manager MDM. This virtual IP address can be used for communication between the MDM cluster and SDCs. Only one virtual IP address can be mapped to each NIC, with a maximum of four virtual IP addresses per system.
Virtual IP NICs: The NIC to which the virtual IP addresses are mapped.
SDS-SDC Only IPs: IP addresses to be used for communication among SDS and SDC nodes only. Maximum of eight addresses, comma-separated, no spaces. For SDC-only nodes, enter the IP address in this column.
SDS Storage Device List: Storage devices to be added to an SDS. For more than one device, use a comma-separated list, with no spaces.
Ensure that devices are prepared as described in "Configuring direct attached storage" in the Architecture section of Getting to Know VxFlex OS.
Device name format on Linux:
/dev/sdb,/dev/sdc
When specifying the SDS device path on a Linux node, use the path according to how it is listed in cat /proc/partitions (and not according to the output of fdisk -l). Use persistent device names.
For example:
fdisk output: /dev/mapper/samah-lv1, /dev/sdb
cat /proc/partitions output: dm-3, sdb
Use these values in the CSV file: /dev/dm-3, /dev/sdb
To enable volume creation, you must add (at least) one device to (at least) three SDSs, where each SDS is in a separate Fault Set, and each device has a minimum of 130 GB free storage capacity (an SDS which has not been assigned to a Fault Set is treated as a Fault Set of its own). You can do that via the CSV file, or at a later stage. The maximum number of devices per SDS is listed in the "Product limits" table in the Getting to Know VxFlex OS guide.
Device data is erased when devices are added to an SDS. When adding a device to an SDS, VxFlex OS will check that the device is clear before adding it. An error will be returned, per device, if it is found not to be clear.
VxFlex OS might not perform optimally if there are large differences between the sizes of the devices in the Storage Pool—for example, if one device is as big as the rest of the devices. After adding devices, you can define how much of the device capacity is available to VxFlex OS by using the SCLI command modify_sds_device_capacity.
For optimal performance, try to balance the number of devices of a Storage Pool, and the capacity of those devices, among the relevant SDSs.
When adding devices that were used in a previous VxFlex OS system, follow the instructions in "Prepare previously used SDS devices".
StoragePool List: Sets Storage Pool names. Use this option in one of the following ways to assign Storage Pools to the devices in the SDS Storage Device List:
• A comma-separated list of Storage Pool names; the length of the list must be the same length as the list of devices in the SDS Storage Device List. In this case, the Storage Pools will be mapped to the devices in that list, respectively. The list must be comma-separated, with no spaces.
• If one Storage Pool is entered here, the same Storage Pool will be mapped to all the devices in the SDS Storage Device List.
• If no Storage Pool is listed here, one will be automatically created during installation, named default.
perfProfileForSDS: Optional performance profile to set for SDSs: High (default) or Compact. When this field is left empty, the default option is applied. High performance is required for the Fine Granularity data layout feature. For more information, see "Configure performance profile during deployment".
perfProfileForSDC: Optional performance profile to set for SDCs: High (default) or Compact. When this field is left empty, the default option is applied. High performance is required for the Fine Granularity data layout feature. For more information, see "Configure performance profile during deployment".
RFcache SSD Device List: List of SSD devices to provide RFcache acceleration for Medium Granularity data layout Storage Pools. Up to eight devices, comma-separated, with no spaces. If RFcache is Yes, and this field is left blank, you can add RFcache devices after installation.
NVDIMM Acceleration Device List: List of NVDIMM devices used to provide acceleration for Storage Pools using Fine Granularity data layout. Up to four devices, comma-separated, with no spaces. Ensure that the NVDIMM devices are configured for DAX mode.
Storage Pool Configuration section: Optional section of the CSV file that lets you configure properties for each of the Storage Pools to which the SDS devices are assigned.
Media Type: The expected device media type in the Storage Pool: HDD or SSD
Fine Granularity ACCP: The name of the Fine Granularity Acceleration Pool associated with the Storage Pool
Zero Padding: Whether zero padding is enabled on the Storage Pool (mandatory for Fine Granularity)
Compression Method: The compression method used in the Fine Granularity Storage Pool. The compression method might affect performance. For more information about recommended use cases, refer to the Getting to Know VxFlex OS Guide.
• None—no compression is used
• Normal—compression is enabled
Is Vasa Provider: A VASA provider is used (for VMware vSphere Virtual Volumes (vVols) support). Yes means that the node contains a VASA provider; any other value means that it does not contain one. A cluster may contain a single VASA provider, three VASA providers, or none. VASA providers must be installed on nodes where the SDC is also installed.
Note: You can only add VASA providers to a system that previously contained no VASA providers. You must begin from a state where no VASA providers are installed in the cluster.
Vasa Provider Port: The port used by the VASA provider. This value is only required when there are three VASA providers in the cluster. If this value is not defined, the default value is used: 27017.
Domain: If using a domain user, the name of the domain (not relevant for Linux)
Password (required): Password used to log in to the node. This should be the password of the user entered in the Username column. To authenticate with SSH instead of node passwords, see "Using SSH authentication on the VxFlex OS Gateway".
Linux: /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity
Linux: /etc/vmware/vsphere-client/vc-packages/scaleio
9. After you have logged in to the vSphere web client to complete the registration and you see
groupadd admin
passwd non_root
When prompted, enter the new password and then confirm it by entering it again.
4. Open the sudoers /etc/sudoers file for editing.
vi /etc/sudoers
5. Search the sudoers file for "## Same thing without a password".
6. In the line below the search result, add the text %admin ALL=(ALL) NOPASSWD: ALL to
the file.
7. Search the sudoers file for "Defaults requiretty", and replace it with Defaults !requiretty.
8. Exit the vi editor by typing :wq!
9. Create a hidden directory in the non_root user's home directory to store the SSH
configuration.
mkdir /home/non_root/.ssh
10. Copy the SSH configuration from the root user to the non_root user's directory.
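One plausible sequence (an assumption):

# Copy the root user's authorized keys to the non_root user
cp /root/.ssh/authorized_keys /home/non_root/.ssh/
# Give the non_root user ownership and restrict permissions
chown -R non_root /home/non_root/.ssh
chmod 700 /home/non_root/.ssh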
3. Open the file in a text editor, and insert the following line under the [Unit] section:
[Unit]
...
After=scini.service
• Installation wizard
• Customizable installation using CSV file—hyper-converged system
• Customizable installation using CSV file—2-layer system
• Install the VxFlex OS GUI
Installation wizard
This section describes how to use the VxFlex OS Installer wizard, the quickest way to get a VxFlex
OS system up and running. The wizard is most suitable for environments where all servers are
converged and in one Protection Domain.
Before you begin
Ensure that you have performed all the preparation tasks before using the wizard.
About this task
The wizard guides you through the following tasks:
Procedure
1. Setting up the installation.
2. Running the installation.
After installation is complete, a number of post-installation tasks must be performed,
including GUI installation, system analysis, configuration, and license activation.
1. Log in to the VxFlex OS Installer:
a. Point your browser to this URL: https://<GW_Server_IP>
where <GW_Server_IP> is the IP address of the server where you installed the VxFlex OS Gateway package.
b. Accept the certificate warning; alternatively, install your own certificate for the Tomcat server.
c. Enter the default user name, admin, and the password defined when the VxFlex OS
Gateway was prepared, and then click Login.
2. Upload installation packages:
a. At the top of the Welcome window, click Packages.
You may need to re-authenticate.
b. From the Manage Installation Packages page, click Browse, and browse to the
extracted VxFlex OS component packages. Select all the required files, and then click
Open.
Select these packages for each operating system type that will be used in your VxFlex
OS system:
• MDM
• SDS
• SDC
• LIA
• RFcache XCACHE package (optional; required for Medium Granularity Storage Pool acceleration)
• VASAPROVIDER package (optional; required for VMware vVols)
c. Click Upload.
The uploaded installation packages appear in the packages table.
3. For LIA Authentication, select Native for local authentication, or LDAP for LDAP
authentication.
• If you selected Native, for LIA Password, type a password in the password box, and type it again in the confirm password box.
The LIA password is a new password that will be used to authenticate communication between the VxFlex OS Installer and the Light Installation Agent (LIA). It must meet the same criteria as the MDM password, as listed in the previous step.
• If you selected LDAP, you can add the credentials for one LDAP server. After deployment, up to seven more LDAP servers can be added. For more information, see the "Security" section of the Configure and Customize VxFlex OS Guide.
Enter the required information in the additional fields that are displayed:
◦ LDAP Server URI
◦ LDAP Group and LDAP BaseDN
◦ LDAP User Name and LDAP Password
4. Review the end user license agreement and select the check box to accept the license
agreement.
For complete information on licensing, see the "Licensing" section of this guide.
5. In the Topology section, enter server information:
a. For each node, click on the appropriate cells to edit its IP address, select its host
operating system, and enter an MDM name and password.
• For Linux, type the password of the root user.
• To see the passwords, select Show passwords while editing.
8. When the query phase is complete, close the status message, and at the top right side of
the screen, click Start upload phase.
The Install - upload screen appears, displaying which VxFlex OS packages are being
uploaded to each server, and the status of each command.
9. When the previous phase is complete, close the status message, and at the top right side of
the screen, click Start install phase.
The Install - install screen appears, displaying the status of the installation commands.
10. When all Install command rows are in Completed status, click Start configure phase.
The Install - configure screen displays configuration progress.
11. When all processes are finished, click Mark operation completed.
The VxFlex OS system installation is complete! The wizard installation creates one
Protection Domain and one Storage Pool, both named "default". You will use these when
you enable storage.
The post-installation notes appear, directing you to the steps necessary to start using your
storage.
Procedure
1. Log in to the VxFlex OS Installer:
a. Point your browser to this URL: https://<GW_Server_IP>
where <GW_Server_IP> is the IP address of the server where you installed the VxFlex
OS Gateway package.
b. Accept the certificate warning; alternatively, install your own certificate for the Tomcat
server.
c. Enter the default user name, admin, and the password defined when the VxFlex OS
Gateway was prepared, and then click Login.
2. Upload installation packages:
a. At the top of the Welcome window, click Packages.
You may need to re-authenticate.
b. From the Manage Installation Packages page, click Browse, and browse to the
extracted VxFlex OS component packages. Select all the required files, and then click
Open.
Select these packages for each operating system type that will be used in your VxFlex
OS system:
• MDM
• SDS
• SDC
• LIA
• RFcache XCACHE package (optional; required for Medium Granularity Storage Pool acceleration)
• VASAPROVIDER package (optional; required for VMware vVols)
c. Click Upload.
The uploaded installation packages appear in the packages table.
Results
You are now ready to start the installation.
2. For LIA Authentication, select Native for local authentication, or LDAP for LDAP
authentication.
Note: When using the VxFlex OS Installer to extend (rather than to install) a system,
enter the LIA credentials that you entered during the initial installation.
• If you selected Native, for LIA Password, enter a password in the password field, and enter it again in the confirm password field.
The LIA password is a new password that will be used to authenticate communication between the VxFlex OS Installer and the Light Installation Agent (LIA). It must meet the same criteria as the MDM password, as listed in the previous step.
• If you selected LDAP, you can add the credentials for one LDAP server. After deployment, up to seven more LDAP servers can be added. For more information, see the "Security" section of the Configure and Customize VxFlex OS Guide.
Enter the required information in the additional fields that are displayed:
◦ LDAP Server URI
◦ LDAP Group and LDAP BaseDN
◦ LDAP User Name and LDAP Password
3. Review the end user license agreement and select the check box to accept the license
agreement.
For complete information on licensing, see the "Licensing" section of this guide.
4. Advanced options can be configured now. The following list describes the available options.
Click Set advanced options to access these options and configure them.
Configuration options: Enable alert service and create lockbox
Enables the alert service required for SNMP and SRS reporting. This also creates and configures the lockbox. This is the best practice method for creating the lockbox.
Note: It is also possible to manually create the lockbox later, but the manual method involves security considerations.
Security options: Disable secure communication with the MDMs
Disables the need for secure communication mode between management clients and the MDM cluster members.
Note: Non-secure communication has security implications, and should be used with caution. For more information about this mode, see "Use SCLI in non-secure mode" in the Configure and Customize VxFlex OS Guide.
The Gateway IP address is automatically added to the list and does not have to be explicitly mentioned. If the VxFlex OS Gateway uses multiple IP addresses, enter all the addresses.
5. You can use the VxFlex OS Installer to configure Syslog event reporting. You can also
configure these features after installation, using the CLI. To configure Syslog reporting now,
select Configure the way Syslog events are sent, and enter the following parameters:
a. Syslog Server
The host name or IP address of the syslog server where the messages should be sent.
Enter up to two server names or addresses, comma-separated.
b. Port
The port of the syslog server (default 1468)
c. Syslog Facility
The facility level (default: Local0)
6. Review the displayed information, by scrolling through the tables, and by hovering over the
blue hyperlinks. To view the password that will be configured for a specific component, click
show.
If you have not done so yet, keep a record of all the passwords that you have assigned to
the system components.
7. Scroll down to the Storage Pool Configuration section of the table, and select values for the following items. If the CSV file contained values in the Storage Pool Configuration section, the values will already appear here. These values are mandatory.
• The expected Device Media Type
• The External Acceleration type (if used):
◦ none—No devices are accelerated by a non-VxFlex OS read or write cache
◦ read—All devices are accelerated by a non-VxFlex OS read cache
◦ write—All devices are accelerated by a non-VxFlex OS write cache
◦ read_and_write—All devices are accelerated by both non-VxFlex OS read and write cache
This input is required in order to prevent the generation of false alerts for media type mismatches. For example, if an HDD device is added which the SDS perceives as being too fast to fit the HDD criteria, alerts might be generated. External acceleration/caching is explained in the Getting to Know VxFlex OS Guide.
• Data Layout type (use with caution; the data layout type cannot be changed after Storage Pool creation)
• If Fine Granularity data layout is selected, select a Fine Granularity ACCP (Acceleration Pool), and a Compression Method
• If Medium Granularity data layout is selected, the Zero Padding option is available for selection. Use with caution, because zero padding cannot be changed after Storage Pool creation.
Note: Fine Granularity can only be selected if the Storage Pool contains NVDIMMs and SSDs.
Results
Installation begins. Proceed to the next task to perform the remaining mandatory steps in the
installation process.
3. When the upload phase is complete, close the status message, and at the top right side of
the screen, click Start install phase.
The Install - install screen appears, displaying the status of the installation commands.
4. When the install phase is complete, close the status message, and at the top right side of
the screen, click Start configure phase.
The Install - configure screen appears, displaying the status of the configuration
commands.
5. When all processes are finished, close the status message, and click Mark operation
completed.
b. Accept the certificate warning; alternatively, install your own certificate for the Tomcat
server.
c. Enter the default user name, admin, and the password defined when the VxFlex OS
Gateway was prepared, and then click Login.
2. Upload installation packages:
a. At the top of the Welcome window, click Packages.
You may need to re-authenticate.
b. From the Manage Installation Packages page, click Browse, and browse to the
extracted VxFlex OS component packages. Select all the required files, and then click
Open.
Select these packages for each operating system type that will be used in your VxFlex
OS system:
• MDM
• SDS
• SDC
• LIA
• RFcache XCACHE package (optional; required for Medium Granularity Storage Pool acceleration)
• VASAPROVIDER package (optional; required for VMware vVols)
c. Click Upload.
The uploaded installation packages appear in the packages table.
Results
You are now ready to start the installation.
2. For LIA Authentication, select Native for local authentication, or LDAP for LDAP
authentication.
Note: When using the VxFlex OS Installer to extend (rather than to install) a system,
enter the LIA credentials that you entered during the initial installation.
• If you selected Native, for LIA Password, enter a password in the password field, and enter it again in the confirm password field.
The LIA password is a new password that will be used to authenticate communication between the VxFlex OS Installer and the Light Installation Agent (LIA). It must meet the same criteria as the MDM password, as listed in the previous step.
• If you selected LDAP, you can add the credentials for one LDAP server. After deployment, up to seven more LDAP servers can be added. For more information, see the "Security" section of the Configure and Customize VxFlex OS Guide.
Enter the required information in the additional fields that are displayed:
◦ LDAP Server URI
◦ LDAP Group and LDAP BaseDN
◦ LDAP User Name and LDAP Password
3. Review the end user license agreement and select the check box to accept the license
agreement.
For complete information on licensing, see the "Licensing" section of this guide.
4. Advanced options can be configured now. The following list describes the available options.
Click Set advanced options to access these options and configure them.
Configuration options: Enable alert service and create lockbox
Enables the alert service required for SNMP and SRS reporting. This also creates and configures the lockbox. This is the best practice method for creating the lockbox.
Note: It is also possible to manually create the lockbox later, but the manual method involves security considerations.
Security options: Disable secure communication with the MDMs
Disables the need for secure communication mode between management clients and the MDM cluster members.
Note: Non-secure communication has security implications, and should be used with caution. For more information about this mode, see "Use SCLI in non-secure mode" in the Configure and Customize VxFlex OS Guide.
5. You can use the VxFlex OS Installer to configure Syslog event reporting. You can also
configure these features after installation, using the CLI. To configure Syslog reporting now,
select Configure the way Syslog events are sent, and enter the following parameters:
a. Syslog Server
The host name or IP address of the syslog server where the messages should be sent.
Enter up to two server names or addresses, comma-separated.
b. Port
The port of the syslog server (default 1468)
c. Syslog Facility
The facility level (default: Local0)
6. Review the displayed information, by scrolling through the tables, and by hovering over the
blue hyperlinks. To view the password that will be configured for a specific component, click
show.
If you have not done so yet, keep a record of all the passwords that you have assigned to
the system components.
7. Scroll down to the Storage Pool Configuration section of the table, and select values for the following items. If the CSV file contained values in the Storage Pool Configuration section, the values will already appear here. These values are mandatory.
• The expected Device Media Type
• The External Acceleration type (if used):
◦ none—No devices are accelerated by a non-VxFlex OS read or write cache
◦ read—All devices are accelerated by a non-VxFlex OS read cache
◦ write—All devices are accelerated by a non-VxFlex OS write cache
◦ read_and_write—All devices are accelerated by both non-VxFlex OS read and write cache
This input is required in order to prevent the generation of false alerts for media type mismatches. For example, if an HDD device is added which the SDS perceives as being too fast to fit the HDD criteria, alerts might be generated. External acceleration/caching is explained in the Getting to Know VxFlex OS Guide.
• Data Layout type (use with caution; the data layout type cannot be changed after Storage Pool creation)
• If Fine Granularity data layout is selected, select a Fine Granularity ACCP (Acceleration Pool), and a Compression Method
• If Medium Granularity data layout is selected, the Zero Padding option is available for selection. Use with caution, because zero padding cannot be changed after Storage Pool creation.
Note: Fine Granularity can only be selected if the Storage Pool contains NVDIMMs and SSDs.
3. When the upload phase is complete, close the status message, and click Start install phase.
The Install - install page appears, displaying the status of the installation commands.
4. When the install phase is complete, close the status message, and click Start configure
phase.
The Install - configure page appears, displaying the status of the configuration commands.
5. When all processes are finished, close the status message, and click Mark operation
completed.
The VxFlex OS system installation is complete!
Note: Marking the operation completed signals to the VxFlex OS Installer that it can
now be used for other installations, upgrades, and so on.
The post-installation notes appear, directing you to the steps necessary to start using your
storage.
VxFlexOSPluginSetup-3.0-X.<build>.ps1
From the vSphere Home tab, verify that the VxFlex OS icon is visible in the
Inventories section.
Results
The VxFlex OS plug-in is registered. If the VxFlex OS icon is missing, the vCenter server failed to register the VxFlex OS plug-in, due to one of the following reasons:
• A connectivity problem between the vSphere web client server and the host storing the VxFlex OS plug-in (for example, a network or firewall issue).
Resolution: Verify that there is communication between the vSphere web client server and the host storing the VxFlex OS plug-in.
• A URL problem when using an external web server.
Resolution: Verify that the URL begins with https:// and points to the correct web server IP address (VxFlex OS Gateway).
For information about how to use the log to troubleshoot problems that may arise, see
"Troubleshooting VxFlex OS plug-in registration issues".
f. Click Run.
The status appears in the dialog box.
g. When finished, click Finish.
h. You must restart the ESXi hosts before proceeding.
2. Register the system:
Install the SDC on an ESXi server and connect it to VxFlex OS using esxcli
Install the SDC with the appropriate parameters to connect it to an existing VxFlex OS system.
This procedure is relevant both for adding more SDCs to an existing system, and for adding SDCs
to a 2-layer system during initial deployment activities.
Before you begin
Ensure that you have:
• The virtual IP address or MDM IP address of the existing system. If an MDM virtual IP address is not in use, obtain the IP addresses of all the MDM managers.
• Login credentials for the intended SDC host
• The required installation software package for your SDC's operating system (available from the zipped software packages that can be downloaded from the Customer Support site)
• A GUID string for the SDC. These strings can be generated by tools that are freely available online. The GUID needs to conform to OSF DCE 1.1. The expected format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each x can be a digit (0–9) or a letter (a–f).
About this task
The following procedure explains how to manually install an external SDC on an ESXi server using
esxcli in command line. Alternatively, you can install the external SDC using the vSphere VxFlex OS
plug-in.
Note: This procedure requires two server reboots.
Procedure
1. On the ESXi on which you are installing the SDC, set the acceptance level:
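A representative esxcli form (the acceptance level value is an assumption):

esxcli --server <SERVER_NAME> software acceptance set --level=PartnerSupported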
where <SERVER_NAME> is the ESXi on which you are installing the SDC.
2. Install the SDC:
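A representative sequence (assumptions: the offline-bundle path is illustrative, and the scini module parameter names follow the SDC driver's documented pattern):

# Install the SDC offline bundle (VIB)
esxcli --server <SERVER_NAME> software vib install -d <path_to_SDC_offline_bundle>.zip

# Pass the GUID and MDM IP addresses to the scini driver module
esxcli --server <SERVER_NAME> system module parameters set -m scini -p "IoctlIniGuidStr=<XXXXXX> IoctlMdmIPStr=<LIST_VIP_MDM_IPS>"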
where
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM
l <XXXXXX> is the user-generated GUID string
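The exact command lines are release-specific; the following is a minimal sketch of the typical sequence, assuming the SDC offline bundle has been copied to the host (the bundle file name is illustrative):
esxcli --server <SERVER_NAME> software acceptance set --level=PartnerSupported
esxcli --server <SERVER_NAME> software vib install -d /tmp/EMC-ScaleIO-sdc-3.0-X.<build>.zip
esxcli --server <SERVER_NAME> system module parameters set -m scini -p "IoctlIniGuidStr=<XXXXXX> IoctlMdmIPStr=<LIST_VIP_MDM_IPS>"
Reboot the host after installing the VIB and again after setting the scini module parameters, matching the two reboots mentioned in the note above.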
OS Modifications required
Ubuntu/hLinux/OL Before installing VxFlex OS, ensure that you have followed the
required preparation procedures relating to various types of Linux
operating systems.
l VxFlex OS component packages are delivered as TAR files. Before
installing, perform the following:
CoreOS Before installing VxFlex OS, ensure that you have followed the
required preparation procedures relating to various types of Linux
operating systems.
l VxFlex OS component CoreOS packages are delivered as TAR
files. Before installing, perform the following:
Procedure
1. Install the GPG key on every server on which SDC will be installed. From the VxFlex OS
installation folder, run the following command on every server:
l CoreOS
MDM_IP=<LIST_VIP_MDM_IPS> <SDC_PATH>/EMC-ScaleIO-sdc-3.0-X.<build>.bsx
where
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM
l <SDC_PATH> is the path where the SDC installation package is located
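For a standard RPM-based Linux host (as opposed to CoreOS), the equivalent sketch is as follows; the GPG key and package file names are illustrative and vary by release:
rpm --import RPM-GPG-KEY-ScaleIO
MDM_IP=<LIST_VIP_MDM_IPS> rpm -i <SDC_PATH>/EMC-ScaleIO-sdc-3.0-X.<build>.el7.x86_64.rpm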
Results
The SDC is installed on the Linux server and is connected to VxFlex OS.
After you finish
In newly deployed systems, perform the post-deployment tasks described in this guide. It is highly
recommended to run the VxFlex OS system analysis tool to analyze the system immediately after
deployment, before you provision volumes, and before using the system in production.
In existing systems, map volumes to the new SDCs that you added to the system.
where
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM.
l <SDC_PATH> is the path where the SDC installation package is located.
The SDC package is in a format similar to this: EMC-ScaleIO-sdc-3.0-
X.<build>.aix7.aix7.2.ppc.rpm
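The install command itself follows the same MDM_IP pattern as on Linux; a hedged example (the exact options may differ by release):
MDM_IP=<LIST_VIP_MDM_IPS> rpm -i <SDC_PATH>/EMC-ScaleIO-sdc-3.0-X.<build>.aix7.aix7.2.ppc.rpm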
Results
The SDC is installed on the AIX server and is connected to VxFlex OS.
Procedure
1. On the Windows server on which you are installing the SDC, run the following command at the
command line:
where
l <SDC_PATH> is the path where the SDC installation package is located
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM
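A typical invocation looks like the following sketch; the exact MSI file name and options are release-specific:
msiexec /i <SDC_PATH>\EMC-ScaleIO-sdc-3.0-X.<build>-x64.msi MDM_IP=<LIST_VIP_MDM_IPS>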
2. Get permission to reboot the Windows server, and perform a reboot to load the SDC driver
on the server.
Results
The SDC is installed on the Windows server and is connected to VxFlex OS.
After you finish
In newly deployed systems, perform the post-deployment tasks described in this guide. It is highly
recommended to run the VxFlex OS system analysis tool to analyze the system immediately after
deployment, before you provision volumes, and before using the system in production.
In existing systems, map volumes to the new SDCs that you added to the system.
EMC-ScaleIO-gui-3.0-X.<build>.msi
l Linux:
rpm -i EMC-ScaleIO-gui-3.0-X.<build>.noarch.rpm
l Mac OS:
Double-click the installation file VxFlexOS-gui-3.0-X.<build>.pkg. If the provided
installer does not open automatically, open it, then follow the on-screen instructions.
This section summarizes deployment options and steps for VMware ESXi-based environments.
l Deployment workflow—ESXi................................................................................................ 90
l Deployment checklist—ESXi.................................................................................................90
l Preparation of NVDIMMs for use in VxFlex OS in ESXi environments................................... 93
Deployment workflow—ESXi
The VxFlex OS deployment workflow for ESXi-based systems is described in this summary. The
vSphere VxFlex OS plug-in deployment wizard is used to deploy and configure VxFlex OS
components on ESXi servers automatically, from one workstation.
Although RDM architecture is supported, DirectPath architecture is the recommended best
practice for RAID and SAS Controller managed drives.
For 2-layer systems where MDM and SDS components are installed on Linux servers, and SDC
components are installed on ESXi-based servers, follow the procedures described for Linux 2-layer
systems.
For VxFlex Ready Node or VxRack Node 100 Series systems, refer to your product's Hardware
Configuration and Operating System Installation Guide and Deployment Guide for step-by-step
installation instructions.
The main steps in the VxFlex OS deployment workflow are:
1. Prepare the environment.
2. Register the VxFlex OS plug-in.
3. Upload the OVA template.
4. Deploy VxFlex OS.
5. For DirectPath mode deployments, add storage/acceleration devices.
6. For some 2-layer system configurations, install SDCs on external servers.
7. Run the VxFlex OS system analysis tool.
8. Install the VxFlex OS GUI.
9. Enable storage.
10. Perform mandatory, recommended, and optional post-deployment tasks.
11. Move the system to production.
Note: It is highly recommended to use the VxFlex OS plug-in deployment wizard for
deployment and provisioning. If the need arises, you can perform these tasks manually, or in
combination with the wizard.
Deployment checklist—ESXi
Checklist for deployment requirements in ESXi-based environments.
The following table provides a checklist of items required for VxFlex OS deployment.
Item Checked
General
All servers intended for hosting VxFlex OS components and the VxFlex OS
Gateway meet the system requirements described in the Getting to Know VxFlex
OS guide.
Note: The Dell EMC-supplied hardware configurations satisfy all of the
hardware requirements.
Familiarize yourself with the ESX vStorage APIs for Array Integration (VAAI)
features supported by the system. For more information, see the "Architecture"
section in Getting to Know VxFlex OS.
To use secure authentication mode, ensure that OpenSSL 64-bit v1.0.2 or later is
installed on all servers in the system.
Gateway
The server that will host the VxFlex OS Gateway has adequate memory to run the
VxFlex OS Installer (at least 4 GB) and any other applications.
The VxFlex OS Gateway must not be installed on a server that will host either the
SDC component, or an SDS component that will use the RFcache acceleration
feature.
Note: In hyper-converged infrastructure setups, the VxFlex OS Gateway
component is installed on a separate storage VM (automatic when
deployment is done using the vSphere VxFlex OS plug-in).
ESXi
The host from which you run the PowerShell (.ps1) script, where the vSphere
VxFlex OS plug-in will be stored, meets the following prerequisites:
l Runs on Windows, with 64-bit Java installed
l PowerCLI from VMware (not Windows PowerShell) is installed. The PowerCLI
version should match the version of the vCenter (for example, use PowerCLI
v6.5 with vCenter v6.5).
l Has incoming and outgoing communication access to the vCenter
Note: If your vCenter runs on Windows OS, it is recommended to use the vCenter server itself as this host.
All ESXi hosts selected to have either an MDM, Tie-Breaker, or SDS component
installed on them, must have a defined local datastore, with a minimum of 10 GB
free space (to be used for the storage virtual machine (SVM)). If the ESXi host is
only being used as an SDC, there is no need for this datastore.
Note: If you are planning to host the SVM template on one of the nodes, you
will need an additional 8 GB of space. If you are planning to deploy the VxFlex
OS Gateway SVM on a node that will also host another SVM, you will need an
additional 8 GB of space.
Storage/acceleration devices
Configure disks so they are visible to the operating system according to operating
system and vendor hardware guidelines.
Plan your system so that storage VMs (SVMs) with SDS and MDM roles reside
on the local datastore, because moving them after deployment is not supported.
If you are planning to use DirectPath, ensure that you enable the relevant setting
in the Server BIOS (in Dell servers, refer to SR-IOV setting).
A minimum of three devices must be added to each SDS, all of which must meet the
following prerequisites:
l A minimum of 130 GB available storage capacity.
l The devices must be free of partitions.
l If a device is part of a datastore, before adding the device, you must either
remove the datastore, or use the VxFlex OS plug-in Advanced settings
option to enable VMDK creation.
l If the device has the ESXi operating system on it, you must use the VxFlex
OS plug-in Advanced settings option to enable VMDK creation.
Note: Use of VMDK-based disks for storage devices is not recommended, and
should be used only if no other option is possible.
When using the operating system's factory ISO, do not rename the datastore.
Leave the datastore name as it is ("scaleio-datastore").
Networking
If you are using a firewall to protect your network, and need to enable ports and
IP addresses, refer to the VxFlex OS Security Configuration Guide.
The management network on all of the ESXi hosts that are part of the VxFlex OS
system must have the following items configured:
l Virtual Machine Port Group. (The name must be the same on all the ESXi
hosts.)
l When using distributed switches, the vDS must have the following items
configured:
n VMkernel port (necessary only if using a single network)
n dvPortGroup for virtual machines
2. Alternatively, you can calculate NVDIMM capacity and RAM capacity using the following
formulas:
Note:
The calculation is in binary MiB, GiB, and TiB
Prepare the software packages, register the vSphere VxFlex OS plug-in, upload the OVA file, and
prepare the ESXi servers.
Note: The OVA file upload is not necessary for VxFlex Ready Node servers.
4. From the VxFlex OS <version> Windows directory, save the software artifacts for the
VxFlex OS GUI.
The GUI will be installed and used after deployment to configure some of the system
features.
3. Using VMware PowerCLI (launched with Run as administrator), run the following script:
VxFlexOSPluginSetup-3.0-X.<build>.ps1
From the vSphere Home tab, verify that the VxFlex OS icon is visible in the
Inventories section.
Results
The VxFlex OS plug-in is registered. If the VxFlex OS icon is missing, the vCenter server failed to
register the VxFlex OS plug-in, due to one of the following reasons:
l Connectivity problem between the vSphere web client server and the host storing the VxFlex
OS plug-in (for example, a network or firewall issue).
Resolution: Verify that there is communication between the vSphere web client server and the
host storing the VxFlex OS plug-in.
l URL problem when using an external web server.
Resolution: Verify that the URL is https:// and is pointing to the correct web server IP address
(VxFlex OS Gateway).
For information about how to use the log to troubleshoot problems that may arise, see
"Troubleshooting VxFlex OS plug-in registration issues".
a. On the computer where the vSphere Web client is installed, locate the
webclient.properties file.
On Linux, for example: /etc/vmware/vsphere-client/
-userUrl
"https://10.76.61.139/sample/ScaleIO-vSphere-web-plugin-1.30.0.160.zip"
-thumbprint CA:66:49:D0:CE:D9:8C:A0:D0:93:E3:83:DE:59:25:5F:79:E1:53:B6
-adminEmail [email protected]
The script registers the plug-in and the following message appears:
l Parallelism limit
Enables increasing the parallelism limit (default: 100), which speeds up deployment and can
be useful when deploying a very large system (several hundred nodes). The achievable limit
depends on the processing power of the vCenter.
On Linux, for example: /var/log/vmware/vsphere-client/logs
For more information about log collection, refer to the VxFlex OS Log Collection Technical Notes.
4. Delete the contents of the folder where the VxFlex OS plug-in is stored.
5. Delete the contents of the VxFlex OS plug-in's logs folder, or the folder itself.
On Linux, for example: /etc/vmware/vsphere-client/vc-packages/scaleio
6. Clean the Virgo logs folder:
On Linux, for example: /var/log/vmware/vsphere-client/logs
7. Start the vSphere web client service.
For example, on Linux:
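On a vCenter Server Appliance, the service is typically started with the following command (an assumption, not from the original text; verify against your vCenter release):
service-control --start vsphere-client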
8. Clear your web browser's cache/cookies (or use a different web browser).
9. Using the PS1 script in PowerCLI, register the VxFlex OS plug-in again.
Note: Do not press ENTER until you have logged in to your vSphere web client
(that completes the registration).
10. Once you see the VxFlex OS plug-in's icon in the vSphere web client, you can press ENTER
in the PowerCLI session.
VxFlexOSPluginSetup-3.0-X.<build>.ps1
Parameter Description
Note:
For best results, enter a local (not shared) datastore for each ESX server.
For faster, parallel deployment in large-scale environments, you can use the OVA to
create SVM templates on as many as eight datastores. To do so, type the datastore
names, and when you are done, leave the next line blank. The following example shows
how to enter two datastores:
datastores[0]: datastore1
datastores[1]: datastore1 (1)
datastores[2]:
The upload procedure can take several minutes, during which time a temporary SVM is
created, the templates are created, and then the temporary SVM is deleted.
When each template is created, a message similar to the following appears:
d. When the process is complete, type 4 to exit the VxFlex OS plug-in script.
2. Select the ESXi hosts, and select the settings required for each.
Note:
For the Install SDC option, it is highly recommended to select all ESXi hosts that may
be included in the VxFlex OS system, even if only in the future.
The Select the SDC driver file you want to install dialog box is displayed.
4. Select the required SDC driver file, and click OK.
The SDC driver is installed on the selected nodes.
5. Click Run.
The status appears in the dialog.
6. When finished, click Finish.
7. Restart each ESXi host.
Note: You must restart the ESXi hosts before proceeding.
After rebooting, a RAID controller that was configured with DirectPath will be displayed in
the vSphere client Configure tab, on the DirectPath I/O PCI Devices Available to VMs
screen.
a. In the VxFlex OS screen, click Advanced Settings to display the settings options.
b. Select the Enable RDMs on non Parallel SCSI Controller option and click OK.
Results
After finishing this task, the results of your selections are displayed after reopening the Pre-
Deployment Actions screen.
After you finish
Proceed with the VxFlex OS deployment.
On Linux, for example: /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity
On Linux, for example: /etc/vmware/vsphere-client/vc-packages/scaleio
9. After you have logged in to the vSphere web client to complete the registration and you see the VxFlex OS plug-in icon, press ENTER in the PowerCLI session.
EMC-ScaleIO-gui-3.0-X.<build>.msi
l Linux:
rpm -i EMC-ScaleIO-gui-3.0-X.<build>.noarch.rpm
l Mac OS:
Double-click the installation file VxFlexOS-gui-3.0-X.<build>.pkg. If the provided
installer does not open automatically, open it, then follow the on-screen instructions.
The VxFlex OS plug-in deployment wizard operates on the assumption that you are
using the provided VxFlex OS OVA template to create the VxFlex OS virtual machines.
4. In the Add ESXi Hosts to Cluster screen, select the ESXi hosts to add as part of the
system:
a. Select the vCenter on which to deploy the VxFlex OS system.
The vCenter information is populated in the lower part of the screen.
b. Select an ESXi server to serve for each of the MDM cluster roles.
You can give a name to the MDM servers, such as Manager1, etc.
c. Select ESXi servers to serve as Standby Manager and Tie-Breaker roles (optional).
d. Click Next.
The Configure Performance, Sizing, and Syslog screen appears.
6. Configure the following settings (optional), then click Next:
l To remove the high-performance profile from components, clear their check boxes.
l To configure the allocation of storage virtual machine (SVM) RAM, select from the
following:
n To use default RAM allocation, select Standard size.
n To use custom settings, select Custom size, and type the maximum capacity and
maximum number of volumes.
l To configure syslog reporting, select Configure syslog, and type the syslog server, port
(default: 1468), and facility (default: 0).
l To configure DNS servers, type their details.
You can create (or remove) Protection Domains (PD). You must create at least one PD.
7. Create a Protection Domain:
b. Click Add.
The added PDs appear in the lower section of the screen, together with the existing PDs.
To remove a newly created PD, select it and click Remove.
c. To create additional PD, repeat this step.
d. Click Next.
The Configure Acceleration Pools screen appears. In this screen, you can create one or
more Acceleration Pools, which will be used to accelerate storage.
8. Create an Acceleration Pool:
a. Enter the Acceleration Pool name.
b. Select the Protection Domain to which the Acceleration Pool will belong.
c. Click Add, and then Next.
The Create a new Storage Pool screen appears.
In the Configure Storage Pools screen, you can create (or remove) Storage Pools (SP).
You must create at least one SP.
9. Create a Storage Pool:
a. Type the Storage Pool name. It is recommended to use meaningful names.
b. Select the PD to which to add the SP.
c. Select the expected Device Media Type of the devices that will be added to the SP.
d. Select the External Acceleration type (if used):
l none—No devices are accelerated by a non-VxFlex OS read or write cache
l read—All devices are accelerated by a non-VxFlex OS read cache
l write—All devices are accelerated by a non-VxFlex OS write cache
l read_and_write—All devices are accelerated by both non-VxFlex OS read and write
cache
e. To enable zero padding, select Enable zero padding. Zero padding must be enabled for
using the background scanner in data comparison mode.
f. To enable Read Flash cache, select Enable RFcache.
g. Click Add.
The added SPs appear in the lower section of the screen, together with the existing SPs.
To remove a newly created SP, select it and click Remove.
h. To create additional SPs, repeat this step.
i. Click Next.
The Create Fault Sets screen appears. You can use this screen to create Fault Sets
(optional).
Note: When defining Fault Sets, you must follow the guidelines described in the
"Architecture" section of the Getting to Know VxFlex OS guide. Failure to do so may
prevent creation of volumes.
10. Create a Fault Set (optional):
a. Type the Fault Set name. It is recommended to use meaningful names.
b. Select to which PD to add the Fault Set.
c. Click Add.
Added Fault Sets appear in the lower section of the screen, inside the folder of the
parent PD. You can remove a newly created Fault Set by selecting it and clicking
Remove.
d. Repeat to create additional Fault Sets (a minimum of three is required), then click Next.
The Add SDSs screen appears.
11. Configure the following for every ESXi host or SVM, then click Next:
a. For every SVM in a DirectPath deployment, you must select SDS and assign a
Protection Domain.
Note: To make the same selections for every ESX in a cluster, you can make your
selections per cluster or datacenter.
d. Click Next.
Adding devices to SDS is done after the deployment is complete.
b. Choose whether to enable or disable the LUN comparison for ESXi hosts.
In general, in environments where the SDC is installed on ESXi and also on physical
hosts, you should set this to Disable.
Note: Before disabling LUN comparison, consult your environment administrator.
c. Click Next.
The Configure Upgrade Components dialog box appears.
13. Configure the VxFlex OS Gateway and Lightweight Installation Agent (LIA):
b. Type and confirm a password for the VxFlex OS Gateway administrative user.
c. Type and confirm a password for the LIA.
The password must be the same across all SVMs in the system.
d. Click Next.
Note: You can only move to the next step if the passwords meet the listed criteria,
and if the confirmation passwords match the entered passwords.
b. Type and confirm a new root password that will be used for all SVMs to be created.
c. Click Next.
The Configure Networks screen appears:
15. Select the network configuration. You can select an existing (simple or distributed) network,
or select Create a new network.
The Create a new network command is only relevant for a regular vSwitch, and not for a
distributed vSwitch.
You can use a single network for management and data transfer or separate networks.
Separating the networks is recommended for security and increased efficiency. You can
select one data network or two data networks.
The management network, used to connect and manage the SVMs, is normally connected to
the client management network, typically a 1 GbE network.
The data network is internal, enabling communication between the VxFlex OS components,
and is recommended to be at least a 10 GbE network.
For high availability and performance, it is recommended to have two data networks.
Note: The selected networks must have communication with all of the system nodes. In
some cases, while the wizard does verify that the network names match, this does not
guarantee communication, as the VLAN IDs may have been manually altered.
a. To use one network, select a protocol (IPv4 or IPv6) and a management network, click
Next and proceed with SVM configuration.
For best results, it is highly recommended to use the plug-in to create the data networks,
as opposed to creating them manually.
b. To use separate networks, select a protocol (IPv4 or IPv6) for the management network
label and one or two data network labels. If the data network already exists (such as a
customer pre-configured distributed switch or a simple vswitch), select it from the drop-
down box. Otherwise, configure the data network by clicking Create new network.
The Create New Data Network screen appears:
Note: You can click to auto-fill the values for Data NIC and VMkernel IP.
d. Click OK.
The data network is created.
The wizard will automatically configure the following for the data network:
l vSwitch
l VMkernel Port
l Virtual Machine Port Group
l VMkernel Port Binding
e. Click Next.
The Configure SVM screen appears.
Icons indicate the role that the server plays in the VxFlex OS system. You can select
to auto-fill the values for IP addresses.
The Review Summary screen appears.
Note: If you intend to enable zero padding on a Storage Pool, you must do so before you add
any devices to the Storage Pool.
Procedure
1. From the SDSs screen of the VxFlex OS plug-in, select one of the following:
l Right-click a specific SDS, and then select Add devices to a single SDS.
l Right-click any SDS, and then select Add devices to VxFlex OS system.
The Add Device dialog box is displayed. All devices that can be attached to the selected
SDS are listed. For the system view, all SDSs are listed, and you can choose devices to add
for each SDS. It may take a few moments to load the list of devices from the vCenter.
2. Add devices:
l One-at-a-time:
a. Select whether the device should be used for storage or to provide acceleration.
b. Select the Storage Pool to which the devices should be assigned.
c. To enable the use of devices that may have been part of a previous VxFlex OS
system, select Allow the take over of devices with existing signature.
d. Click OK.
l All devices on a server at once:
a. Click Select all devices.
b. Select whether to use the devices for storage or to provide acceleration.
c. Select the Storage Pool to which the devices should be assigned.
d. To enable the use of devices that may have been part of a previous VxFlex OS
system, select Allow the take over of devices with existing signature.
e. Click Assign.
3. Confirm the action by typing the VxFlex OS password.
4. When the add operation is complete, click Close.
Results
The devices are added.
n Allow the taking over of devices that were used in other VxFlex OS systems.
n Allow the use of non-local datastores for the VxFlex OS Gateway.
n Increase the parallelism limit.
To access these settings, click Advanced settings on the VxFlex OS screen.
About this task
For 2-layer systems where only the SDCs are deployed on ESXi servers, follow the deployment
procedures for 2-layer systems.
Procedure
1. From the Basic tasks section of the screen, click Deploy VxFlex OS environment.
The VxFlex OS VMware deployment wizard begins. If you exited the previous deployment
before completion, you will be able to return from where you left off.
NOTICE When you use the deployment wizard, it is assumed that you are using the
provided VxFlex OS OVA template to create the VxFlex OS virtual machines.
2. In the Select Installation type screen, start the deployment of a new system:
a. Select Create new VxFlex OS system.
b. Review and approve the license terms.
c. Click Next.
3. In the Create new system screen, enter the following, then click Next:
l System Name: Enter a unique name for this system.
l Admin Password: Enter and confirm a password for the VxFlex OS admin user. The
password must meet the listed criteria.
4. In the Add ESX Hosts to Cluster screen, select the ESXi hosts to add as part of the
system:
a. Select the vCenter on which to deploy the VxFlex OS system.
The vCenter information is populated in the lower part of the screen.
b. Expand the vCenter, select the ESXi hosts to add to the VxFlex OS system, then click
Next.
Note: To configure VxFlex OS, you must select a minimum of three ESXi hosts. ESXi
hosts that do not have the SDC installed, or hosts for which DirectPath was
configured before deployment, but DirectPath was not selected in the previous step,
will not be available.
b. Select an ESXi server to serve for each of the MDM cluster roles.
You can give a name to the MDM servers, such as Manager1, and so on.
c. Select ESXi servers to serve as Standby Manager and Tie-Breaker roles (optional).
d. Click Next.
b. Click Add.
The added PDs appear in the lower section of the screen, together with the existing PDs.
To remove a newly created PD, select it and click Remove.
c. To create an additional PD, repeat this step.
d. Click Next.
The Configure Acceleration Pool screen appears. In this screen, you can create an
Acceleration Pool, which will be used to accelerate storage.
8. Create an Acceleration Pool:
a. Enter the Acceleration Pool name.
b. Select the Protection Domain to which the Acceleration Pool will belong.
c. Click Add, and then Next.
The Create a new Storage Pool screen appears.
In the Configure Storage Pools screen, you can create (or remove) Storage Pools (SP).
You must create at least one SP.
9. Create a Storage Pool:
a. Enter the Storage Pool name. It is recommended to use meaningful names.
b. Select to which PD to add the SP.
c. Select the expected Device Media Type for the SP (HDD or SSD).
d. Select the External Acceleration type (if used):
l none—No devices are accelerated by a non-VxFlex OS read or write cache
l read—All devices are accelerated by a non-VxFlex OS read cache
l write—All devices are accelerated by a non-VxFlex OS write cache
e. To enable zero padding, select Enable zero padding. Zero padding must be enabled for
using the background scanner in data comparison mode.
f. To enable Read Flash cache, select the Enable RFcache check box.
g. Click Add.
Note: The Device Media Type input is required in order to prevent the generation of
false alerts for media type mismatches. For example, if an HDD device is added which
the SDS perceives as being too fast to fit the HDD criteria, alerts might be generated.
External acceleration/caching is explained in the Getting to Know VxFlex OS Guide.
The added SPs appear in the lower section of the screen, together with the existing SPs.
To remove a newly created SP, select it and click Remove.
h. To create additional SPs, repeat this step.
i. Click Next.
The Create Fault Sets screen appears. You can use this screen to create Fault Sets
(optional).
Note: When defining Fault Sets, you must follow the Fault Set guidelines described in
the Getting to Know VxFlex OS guide. Failure to do so may prevent creation of volumes.
10. Create a Fault Set (optional):
a. Enter the Fault Set name. It is recommended to use meaningful names.
b. Select to which PD to add the Fault Set.
c. Click Add.
Added Fault Sets appear in the lower section of the screen, inside the folder of the
parent PD. You can remove a newly created Fault Set by selecting it and clicking
Remove.
d. Repeat these steps to create additional Fault Sets (minimum of three), then click Next.
The Add SDSs screen appears.
11. Configure the following for every ESXi host or SVM:
a. Select the corresponding SDS check box to assign an SDS role.
Note: To make the same selections for every ESXi in a cluster, you can make your
selections per cluster or datacenter.
12. On the Information tab, select an ESXi host from a cluster, then click Assign devices.
The Assign devices tab appears.
This screen shows the devices whose free space can be added to the selected ESXi host/
SDS. You should balance the capacity across the selected SDSs.
c. Click Assign.
14. To replicate selections to other SDSs, perform the following:
a. Select the Replicate selection tab.
b. Select the ESXi whose device selection you wish to replicate.
This is the source ESXi.
c. Select the target ESXis to which to replicate the selection of the source ESXi.
d. Click Copy configuration.
The results are displayed in the right pane of the screen.
15. When you have selected devices for all SDSs, click Next.
Note: You must select at least one device for each SDS.
b. Choose whether to enable or disable LUN number comparison for ESXi hosts.
In general, in environments where the SDC is installed on ESXi and also on physical
hosts, you should set this to Disable.
Note: Before disabling LUN number comparison, consult your environment
administrator.
c. Click Next.
The Configure Upgrade Components dialog box appears.
17. Configure the VxFlex OS Gateway and LIA:
a. Select an ESXi to host the VxFlex OS Gateway Storage virtual machine (SVM).
A unique SVM will be created for the VxFlex OS Gateway.
If the previously-selected ESXi servers do not have sufficient free space (on any
datastore) to contain the VxFlex OS SVM template, an SVM, and the VxFlex OS
Gateway SVM, you will not have an option to select an ESXi in this step. It will be done
automatically.
b. Enter and confirm a password for the VxFlex OS Gateway administrative user.
c. Enter and confirm a password for the LIA.
The password must be the same across all SVMs in the system.
d. Click Next.
Note: You can only move forward if the passwords meet the listed criteria, and if the
confirmation passwords match the entered passwords.
Note: If you select a custom template, ensure that it is compatible with the VxFlex
OS plug-in and the VxFlex OS MDM.
b. Enter and confirm a new root password that will be used for all SVMs to be created.
c. Click Next.
The Configure Networks screen appears:
19. Select the network configuration. You can select an existing (simple or distributed) network,
or select Create a new network.
The Create a new network command is only relevant for a regular vSwitch, and not for a
distributed vSwitch.
You can use a single network for management and data transfer, or separate networks.
Separate networks are recommended for security and increased efficiency. You can select
one data network, or two.
The management network, used to connect and manage the SVMs, is normally connected to
the client management network, typically a 1 GbE network.
The data network is internal, enabling communication between the VxFlex OS components,
and is recommended to be at least a 10 GbE network.
For high availability and performance, it is recommended to have two data networks.
Note: The selected networks must have communication with all of the system nodes. In
some cases, while the wizard does verify that the network names match, this does not
guarantee communication, as the VLAN IDs may have been manually altered.
a. To use one network, select a protocol (IPv4 or IPv6), and a management network, then
proceed to the next step, configuring the SVMs.
For best results, it is highly recommended to use the VxFlex OS plug-in to create the
data networks, as opposed to creating them manually.
b. To use separate networks, select a protocol (IPv4 or IPv6) for the management network
label, and one or two data network labels. If the data network already exists (such as a
customer pre-configured distributed switch or a simple vswitch), select it from the drop-
down box. Otherwise, configure the data network by clicking Create new network.
The Create New Data Network screen appears.
Note: You can click to auto-fill the values for Data NIC and VMkernel IP.
d. Click OK.
The data network is created.
The wizard will automatically configure the following for the data network:
l vSwitch
l VMkernel Port
l Virtual Machine Port Group
l VMkernel Port Binding
e. Click Next.
The Configure SVM screen appears.
20. Configure all the SVMs:
Note: You can click to auto-fill a range of values for IP addresses, subnet mask
and default gateway.
a. Enter the IP address, subnet mask, and default gateway for the management network,
then the data network.
b. Enter the Cluster Virtual IP address for each network interface.
c. You can select a datastore, or allow automatic selection.
d. Configure the cluster's virtual IP addresses by entering the virtual IP address for each
data network.
e. Click Next.
Icons indicate the role that the server plays in the VxFlex OS system.
The Review Summary screen appears.
21. Review the configuration.
Click Finish to begin deployment or Back to make changes.
22. Enter the vCenter user name and password, then click OK to begin the deployment.
The Deployment Progress screen appears.
During the deployment process you can view progress, pause the deployment, and view
logs.
To pause the deployment, click Pause. Steps that are already in progress will pause after
they are completed.
After pausing, select one of the following options:
l Perform the post-installation tasks described in this guide. It is highly recommended to run the
VxFlex OS system analysis tool to analyze the system immediately after deployment, before
you provision volumes, and before using the system in production.
https://10.76.60.190:443/api/login
This token is valid for 8 hours from the time it was created, unless there has been no activity for 10
minutes, or if the client has sent a logout request.
HTTP token invalidation (logout)
The invalidation is done by invoking the HTTP GET request, with the URI: /api/logout
The token mentioned above is the password in the HTTP Basic authentication (the user is ignored
- it can be empty).
For every REST API request that does not require the gatewayAdminPassword, the
authentication is done by passing the token mentioned above as the password in the HTTP Basic
authentication (the user is ignored - it can be empty).
Requests that require the gatewayAdminPassword work similarly, except that instead of /api/
login, invoke an HTTP GET request, /api/gatewayLogin with user: admin password:
<gatewayAdminPassword> in HTTP Basic authentication. A token is returned. Instead of
invoking /api/logout, invoke /api/gatewayLogout with the token received when you logged
in.
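For illustration only, a minimal curl session against a Gateway at 10.76.60.190 might look like the following sketch; the password and token are placeholders, not values from the original text:
curl -k --basic --user admin:<PASSWORD> https://10.76.60.190/api/login
# The call returns a token string. Use it as the Basic-auth password,
# with an empty user name:
curl -k --basic --user :<TOKEN> https://10.76.60.190/api/types/System/instances
# Invalidate the token (logout) when done:
curl -k --basic --user :<TOKEN> https://10.76.60.190/api/logout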
Note: Requests that require gatewayAdminPassword are:
GET:
/api/Configuration
/api/gatewayLogout
/api/getHostCertificate/{Mdm | Lia}
POST:
/api/updateConfiguration
/api/instances/System/action/createMdmCluster
/api/trustHostCertificate/{Mdm | Lia}
/api/gatewaySetSecureCommunication
Response fields
The order of the fields in the responses may change. More fields may be added in the future.
URIs
l POST (create) / GET all objects for a given type:
/api/types/{type}/instances
l GET by id:
/api/instances/{type::id}
l POST a special action on an object:
/api/instances/{type::id}/action/{actionName}
l POST a special action on a given type:
/api/types/{type}/instances/action/{actionName}
l Get current API version:
/api/version
l Every row in the Object's Parent table appears as a link in the response of get object:
/api/instances/{type::id}
l Every row in the Object's Relationships table appears as a link in the response of get object:
/api/instances/{type::id}/relationships/{Relationship name}
l GET all instances
/api/instances/
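For example, combining the patterns above with object IDs borrowed from the samples later in this chapter (the action name is illustrative only):
GET /api/types/Volume/instances
GET /api/instances/Volume::022beb2500000006
POST /api/instances/Volume::022beb2500000006/action/setVolumeName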
Table 8 Response
Field Type
sessionTag Long
lastSystemVersion Long
lastProtectionDomainVersion Long
lastSdsVersion Long
lastStoragePoolVersion Long
lastDeviceVersion Long
lastVolumeVersion Long
lastVTreeVersion Long
lastSdcVersion Long
lastFaultSetVersion Long
lastAccelerationPoolVersion Long
lastSpSdsVersion Long
lastSnapshotPolicyVersion Long
{
"mdmAddresses":["10.76.60.150", "10.76.60.11"],
"mdmPort":"6611",
"gatewayAdminPassword":"Password1",
"systemId":"7f5d8fc72a3d7f3d" ,
"snmpSamplingFrequency":"30",
"snmpResendFrequency":"0",
"snmpTrapsReceiverIps":["10.76.60.190","10.76.60.191"],
"snmpPort":"162"
}
{
"allowNonSecureCommunication": "false",
"bypassCertificateCheck": "false",
"cipherSuites": null,
"featuresEnableCollectAutoLogs": "false",
"featuresEnableIM": "true",
"snmpResendFrequency": "0",
"snmpSamplingFrequency": "30",
"snmpPort": "162",
"mdmPort":"6611",
"mdmUsername": "alertservice",
"remoteSyslog": [
{
"hostName": "10.76.60.100",
"port": 1468,
"facility": 16
},
{
"hostName": "10.76.60.101",
"port": 1468,
"facility": 16
}
],
"mdmAddresses":[
"10.76.60.150",
"10.76.60.135"
],
"systemId":"7f5d8fc72a3d7f3d",
"snmpTrapsReceiverIps": [
"10.76.60.192"
]
}
Example:
POST https://localhost:8443/api/instances/querySelectedStatistics
Body:
{"selectedStatisticsList":[
{"type":"ProtectionDomain", "ids":["cc480c9b00000000"], "properties":
["capacityInUseInKb">,
{"type":"Volume", "ids":["022beb2500000006","022beb2300000004"],
"properties":["numOfMappedSdcs", "userDataWriteBwc">,
{"type":"Sds", "ids":["c919d82000000001","022beb2300000004"],
"properties":["capacityInUseInKb">,
{"type":"System", "allIds":"", "properties":["rmcacheSizeInKb">,
{"type":"FaultSet", "ids":["c919d82000000001","022beb2300000004"],
"properties":["numOfSds">,
{"type":"StoragePool", "allIds":"", "properties":
["unreachableUnusedCapacityInKb", "numOfThinBaseVolumes">
>
The response:
/api/getHostCertificate/{Mdm|Lia}?host={host ip}
Example:
/api/getHostCertificate/Mdm?host=10.76.60.10
Response:
The host certificate in PEM encoding.
Note: The whole certificate should be saved into a .cer file for the
trustHostCertificate request.
For example:
-----BEGIN CERTIFICATE-----
MIIDcTCCAlmgAwIBAAIBATANBgkqhkiG9w0BAQUFADB8MQwwCgYDVQQqEwNNRE0xFzAVBgNVBAMTD
mNlbnRvcy02LTQtYWRpMRIwEAYDVQQHEwlIb3BraW50b24xFjAUBgNVBAgTDU1hc3NhY2h1c2V0dH
MxCzAJBgNVBAYTAlVTMQwwCgYDVQQKEwNFTUMxDDAKBgNVBAsTA0FTRDAeFw0xNTEyMTkwNzI4MTV
aFw0yNTEyMTcwODI4MTVaMHwxDDAKBgNVBCoTA01ETTEXMBUGA1UEAxMOY2VudG9zLTYtNC1hZGkx
EjAQBgNVBAcTCUhvcGtpbnRvbjEWMBQGA1UECBMNTWFzc2FjaHVzZXR0czELMAkGA1UEBhMCVVMxD
DAKBgNVBAoTA0VNQzEMMAoGA1UECxMDQVNEMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQ
EA4SMybaAEZjfBX9wLglr3wxYHOvID5Pe1Z26Pv8oIR/
MTOVa1Bw4A9px1MHHSIfkAfgRlLC24uebZXhbh0snBq+OL+SJPwEfbOVbif/saXL8RJFwm/
VNg8KHUwjuq/sJkKDjx9uSf0U+/9FzwvKVuM87xDj/rVvJgBYh6pH34q/XD5l8am/iEQr/
EnGZmIsa+VkCL0IeYKbkA3ZINfI4YsjSJ+qeu5e/
KMsNlHEvmhk1DdJbLayn9QkiS5Q9e8A40jjkb2e1Q71awoOlb6+8XXWWkpBhxAnRa9P8Pb1BfcNyU
fXtrKuy+fRjw4Gp
+rw2MdoIDuMbO1+1sQaRvVPTYxwIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQCu1e/jBz63ZFDS
+pFvZ3XI/VMdn9NArM
+8Cjn7Luar8oEVAYI6jqYYcZCk2jQyfuI1HP2jXqPTJo8CNyNT5S6rZ5ryHOiRjn/
K2pVC6kT497LY5lc3LjhXUdLjpWnW2jsGfM93cCkkrxu8wmkh9oo8WizOiRAyKmz02uTEuEok7GJB
S/DR6csnLo2YLUV6ZqeBN9jdzZbIY7SoFWya1K4xZmqhkAtnj1ynP3uoxTkd
+wfDRmYeDv8l5eciLj2BXNuV8zXYWSCyABZC//jvajNtSEXgUura3uh0YBIfbO/
AZ980zUMwJBMBr06yw4tHnHRRYgfI3tnZOD4byaJOdHuq
-----END CERTIFICATE-----
/api/trustHostCertificate/{Mdm|Lia}
Example:
/api/trustHostCertificate/Mdm
Request:
Content-type: multipart/form-data
The body should contain a part named "file" and the file containing the certificate to be
trusted.
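For illustration, such a multipart request could be issued with curl as follows; the certificate file name is a placeholder, and the token is one obtained from /api/gatewayLogin as described above:
curl -k --basic --user :<TOKEN> -F "file=@mdm_certificate.cer" https://10.76.60.190/api/trustHostCertificate/Mdm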
l Set the VxFlex OS Gateway to work with secured communication with the MDM (POST):
/api/gatewaySetSecureCommunication
The VxFlex OS Gateway will not be able to connect to the MDM using non-secured
communication.
l Working with the VxFlex OS Installer REST API
VxFlex OS Installer is an orchestration engine that runs commands from different queues
(nodes and MDM) at different "phases," and labels them as a "process" (such as an upgrade).
The VxFlex OS Installer REST API begins with the /im/ prefix.
Login to the VxFlex OS Installer REST API must be done via POST to the
j_spring_security_check url, followed by username, password, and the login submission. For
example:
POST https://localhost/j_spring_security_check
Content: "j_username=admin&j_password=Password1&submit=Login"
A request such as the above returns a JSESSIONID cookie in the response; use this cookie to
identify the session in all subsequent requests.
For example:
"snmpIp": null,
"installationId": null,
/* list of IP addresses from which LIA will only accept connections—if null, accept any
connection */
"safeIPsForLia": null,
"mdmUser": null,
"mdmPassword": null,
"liaPassword": null,
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaLdapInitialMode": null,
"liaLdapInitialUri": null,
"liaLdapInitialBaseDn": null,
"liaLdapInitialGroup": null,
/* deprecated—not in use */
"licenseKey": null,
/* deprecated—not in use */
"licenseType": null,
/* boolean, indicating whether cluster should be set with high performance profile */
"isClusterOptimized": false,
"upgradeableByCurrentUser": false,
"ignoredSdcs": null,
"upgradeRunning": false,
"clustered": false,
"secondaryMdm": null,
"tb": null,
"pd2Sps": {
"domain1": ["pool1",
"pool2"]
},
"primaryMdm": null
"callHomeConfiguration": null,
"remoteSyslogConfiguration": null,
"systemVersionName": "",
/* security settings: set whether the process can connect to non-secured LIA/MDM and
whether VxFlex OS components should be configured to be non-secured. (They are
secured by default) */
"securityConfiguration": {
"allowNonSecureCommunicationWithMdm": false,
"allowNonSecureCommunicationWithLia": false,
"dontAutoApproveCertificates": false
"disableNonMgmtComponentsAuth": false
"virtualIps": ["10.76.60.203"],
"securityCommunicationEnabled": false,
"masterMdm": {
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.1"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.195"],
"name": "MDM1",
"id": null,
"ipForActor": null,
"managementIPs": ["10.76.1.1"],
/* virtual interfaces to be used by virtual ip */
"virtIpIntfsList": ["eth1"]
},
"slaveMdmSet": [{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.3"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.18"],
"name": "MDM3",
/* unique MDM ID */
"id": null,
"ipForActor": null,
"managementIPs": ["10.76.1.3"],
"virtIpIntfsList": ["eth1"]
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.2"],
"domain": null,
"userName": "non_root(sudo)",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.17"],
"name": "MDM2",
"id": null,
"ipForActor": null,
"managementIPs": ["10.76.1.2"],
"virtIpIntfsList": ["eth1"]
}],
"tbSet": [{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.60.33"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.33"],
"name": "TB1",
"id": null,
"tbIPs": ["10.76.60.33"]
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.60.19"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.19"],
"name": "TB2",
"id": null,
"tbIPs": ["10.76.60.19"]
}],
"standbyMdmSet": [{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.5"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.196"],
"name": "MDMa",
"id": null,
"ipForActor": null,
"managementIPs": ["10.76.1.5"],
"virtIpIntfsList": ["eth1"]
}],
"standbyTbSet": [{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.60.37"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"mdmIPs": ["10.76.60.37"],
"name": "Tba",
"id": null,
"tbIPs": ["10.76.60.37"]
"sdsList": [{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.3"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"sdsName": "SDS_10.76.1.3",
/* Protection Domain name (required) and ID (if it already exists, this is optional) */
"protectionDomain": "domain1",
"protectionDomainId": null,
"faultSet": "fs2",
"faultSetId": "0",
"allIPs": ["10.76.60.18"],
"sdsOnlyIPs": null,
"sdcOnlyIPs": null,
"devices": [{
"devicePath": "\\\\.\\PhysicalDrive1",
"storagePool": "pool1",
"deviceName": null,
"maxCapacityInKb": -1
},
{
"devicePath": "\\\\.\\PhysicalDrive2",
"storagePool": "pool2",
"deviceName": null,
"maxCapacityInKb": -1
}],
"rfCached": true,
"rfCachedPools": ["pool1"],
"rfCachedDevices": ["/dev/sdd"],
"rfCacheType": "MEDIA_TYPE_SSD",
"flashAccDevices": [],
"nvdimmAccDevices": [{
"devicePath": "/dev/dax0.1",
"storagePool": "accp2",
"deviceName": "/dev/dax0.1",
"maxCapacityInKb": -1
}],
"useRmCache": false,
"optimized": true,
"packageNumber": 0,
"optimizedNumOfIOBufs": 3,
"port": 7072,
"id": "0"
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.2"],
"domain": null,
"userName": "non_root(sudo)",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"sdsName": "SDS_10.76.1.2",
"protectionDomain": "domain1",
"protectionDomainId": null,
"faultSet": "fs1",
"faultSetId": "0",
"allIPs": ["10.76.60.17"],
"sdsOnlyIPs": null,
"sdcOnlyIPs": null,
"devices": [{
"devicePath": "E:",
"storagePool": "pool2",
"deviceName": null,
"maxCapacityInKb": -1
}],
"rfCached": true,
"rfCachedPools": [],
"rfCachedDevices": ["/dev/sdd"],
"rfCacheType": "MEDIA_TYPE_SSD",
"flashAccDevices": [],
"nvdimmAccDevices": [{
"devicePath": "/dev/dax0.0",
"storagePool": "accp2",
"deviceName": "/dev/dax0.0",
"maxCapacityInKb": -1
}],
"useRmCache": false,
"optimized": true,
"packageNumber": 0,
"optimizedNumOfIOBufs": 3,
"port": 7072,
"id": "0"
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.1"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"sdsName": "SDS_10.76.1.1",
"protectionDomain": "domain1",
"protectionDomainId": null,
"faultSet": "fs1",
"faultSetId": "0",
"allIPs": ["10.76.60.195"],
"sdsOnlyIPs": null,
"sdcOnlyIPs": null,
"devices": [{
"devicePath": "/dev/sdb",
"storagePool": "pool1",
"deviceName": "sdb_Device",
"maxCapacityInKb": -1
},
{
"devicePath": "/dev/sdc",
"storagePool": "pool1",
"deviceName": "sdc_Device",
"maxCapacityInKb": -1
}],
"rfCached": true,
"rfCachedPools": ["pool1"],
"rfCachedDevices": ["/dev/sdd"],
"rfCacheType": "MEDIA_TYPE_SSD",
"flashAccDevices": [],
"nvdimmAccDevices": [{
"devicePath": "/dev/dax0.0",
"storagePool": "accp2",
"deviceName": "/dev/dax0.0",
"maxCapacityInKb": -1
}],
"useRmCache": false,
"optimized": true,
"packageNumber": 0,
"optimizedNumOfIOBufs": 3,
"port": 7072,
"id": "0"
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.60.33"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"sdsName": "SDS1",
"protectionDomain": "domain1",
"protectionDomainId": null,
"faultSet": "fs3",
"faultSetId": "0",
"allIPs": ["10.76.60.33"],
"sdsOnlyIPs": ["10.76.4.4",
"10.76.2.2"],
"sdcOnlyIPs": ["10.76.3.3"],
"devices": [{
"devicePath": "/dev/sdb",
"storagePool": "pool1",
"deviceName": null,
"maxCapacityInKb": -1
},
{
"devicePath": "/dev/sdc",
"storagePool": "pool2",
"deviceName": null,
"maxCapacityInKb": -1
}],
"rfCached": true,
"rfCachedPools": ["pool1"],
"rfCachedDevices": ["/dev/sdd"],
"rfCacheType": "MEDIA_TYPE_SSD",
"flashAccDevices": [],
"nvdimmAccDevices": [{
"devicePath": "/dev/dax0.1",
"storagePool": "accp2",
"deviceName": "/dev/dax0.1",
"maxCapacityInKb": -1
}],
"useRmCache": false,
"optimized": true,
"packageNumber": 0,
"optimizedNumOfIOBufs": 3,
"port": 7072,
"id": "0"
}],
"sdcList": [{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.6"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"guid": null,
"splitterRpaIp": null,
"sdcName": "SDCb",
"isOnESX": null,
"id": "0",
"optimized": true
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.4"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"guid": null,
"splitterRpaIp": null,
"sdcName": "SDC3",
"isOnESX": null,
"id": "0",
"optimized": false
},
{
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.1.2"],
"domain": null,
"userName": "non_root(sudo)",
"password": "Password1",
"liaLdapUsername": null,
"liaLdapPassword": null,
"liaPassword": null
},
"nodeInfo": null,
"tunables": null,
"rollbackVersion": null,
"guid": null,
"splitterRpaIp": null,
"sdcName": null,
"isOnESX": null,
"id": "0",
"optimized": true
}],
"volumes": [{
"id": "0",
"name": "vol1",
"volumeSizeInKb": "8000000",
}],
"protectionDomains": [{
"name": "domain1",
"storagePools": [{
"name": "pool1",
"mediaType": "MEDIA_TYPE_HDD",
"externalAccelerationType": "EXTERNAL_ACCELERATION_TYPE_READ",
"dataLayout": "PERFORMANCE_OPTIMIZED",
"compressionMethod": null,
"spefAccPoolName": null,
"shouldApplyZeroPadding": true,
"writeAtomicitySize": null,
"overProvisioningFactor": null,
"maxCompressionRatio": null,
"perfProfile": null
},
{
"name": "pool2",
"mediaType": "MEDIA_TYPE_SSD",
"externalAccelerationType": "EXTERNAL_ACCELERATION_TYPE_WRITE",
"dataLayout": "SPACE_EFFICIENT",
"compressionMethod": "COMPRESSION_METHOD_NONE",
"spefAccPoolName": "accp2",
"shouldApplyZeroPadding": true,
"writeAtomicitySize": null,
"overProvisioningFactor": null,
"maxCompressionRatio": null,
"perfProfile": null
}],
"accelerationPools": [{
"name": "accp2",
"mediaType": "MEDIA_TYPE_NVDIMM",
"rfcache": null
}]
}]
}
n The Node object is used as an attribute of each VxFlex OS component (for example, on
which node the MDM is installed), and describes how the node will be accessed:
– via SSH (Linux node with root credentials)
– via WMI (Windows node with admin domain and credentials)
– via LIA (unknown node with null domain, null credentials and LIA password)
For example:
n LIA usage—Any post-installation operations MUST be done via LIA. Such operations (for
example, upgrades) may fail unless all nodes are set to be accessed via LIA.
"node": {
"ostype": "unknown",
"nodeName": null,
"nodeIPs": ["10.76.60.48"],
"domain": null,
"userName": null,
"password": null,
"liaPassword": null
},
"node": {
"ostype": "linux",
"nodeName": null,
"nodeIPs": ["10.76.60.41"],
"domain": null,
"userName": "root",
"password": "Password1",
"liaPassword": null
},
POST https://localhost/j_spring_security_check
Content: "j_username=admin&j_password=Password1&submit=Login"
where the user name in the above example is "admin" and the password is "Password1".
JSESSIONID=969F624A761937AE80E6CC9E91756B10
3. Set the following headers, using the JSESSION ID obtained in the previous steps. For
example:
"Content-Type":"application/json"
"Cookie":"JSESSIONID=969F624A761937AE80E6CC9E91756B10"
Results
The JSESSION ID will be added to the URIs. You may now run any VxFlex OS Installer REST API
URI.
Figure 3 Example of VxFlex OS Installer REST API URI when JSESSION ID is configured as a header
Note: Any post installation operations MUST be performed using LIA. Operations such as
upgrades may fail unless all nodes have been targeted to be accessed via LIA.
To deploy the system, use the following workflow.
Procedure
1. Upload packages to the VxFlex OS Installer:
POST /im/types/InstallationPackages/instances/actions/uploadPackages
2. Parse the CSV topology file:
POST /im/types/Configuration/instances/actions/parseFromCSV
Headers:
l -u <GATEWAY_USERNAME>:<GATEWAY_PASSWORD>
l -F "file=@<FILENAME>", where <FILENAME> must contain the full path
For example:
https://254.222.10.123/im/types/Configuration/instances/actions/parseFromCSV
This command returns the configuration that is used in the next step.
3. Start deployment using the JSON configuration (query phase starts automatically):
/im/types/Configuration/actions/install
Deployment consists of the following phases: query, upload, install and configure.
4. For each phase, check its progress to monitor that it is completed:
/im/types/Command/instances
5. When the phase is completed, move to the next phase:
POST /im/types/ProcessPhase/actions/moveToNextPhase
6. Repeat the two previous steps until the configuration phase is completed.
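The same workflow, sketched with curl under the assumptions above (the session cookie is obtained as shown earlier; file paths, the package name, and the Gateway address are placeholders, not from the original):
# 1. Upload a package:
curl -k -b /tmp/im_cookies.txt -F "file=@/tmp/packages/EMC-ScaleIO-sds-3.0-X.<build>.el7.x86_64.rpm" https://localhost/im/types/InstallationPackages/instances/actions/uploadPackages
# 2. Parse the CSV (returns the JSON configuration):
curl -k -u <GATEWAY_USERNAME>:<GATEWAY_PASSWORD> -F "file=@/tmp/topology.csv" https://localhost/im/types/Configuration/instances/actions/parseFromCSV
# 3. Start the deployment with the returned configuration:
curl -k -b /tmp/im_cookies.txt -H "Content-Type: application/json" -d @/tmp/config.json https://localhost/im/types/Configuration/actions/install
# 4. Monitor phase progress:
curl -k -b /tmp/im_cookies.txt https://localhost/im/types/Command/instances
# 5. Move to the next phase:
curl -k -b /tmp/im_cookies.txt -X POST https://localhost/im/types/ProcessPhase/actions/moveToNextPhase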
Configure VxFlex OS to use VMware's vSphere API for Storage Awareness (VASA).
Port usage
If you are using a firewall to protect your network, and need to enable ports and IP addresses,
refer to the VxFlex OS Security Configuration Guide. This guide contains information about port
usage for all components, and what to modify in case port numbers need to be changed.
The system analysis tool is invoked from the VxFlex OS Installer. The analysis checks the following:
l VxFlex OS components are up and running
l Ping between two relevant nodes in the system
l Connectivity within the VxFlex OS configuration (for example, connectivity between SDSs
within a Protection Domain, connection of SDCs with the cluster virtual IP address)
l Network configuration
l RAID controller and device configuration
Environment requirements and prerequisites
1. Supports servers running on the following operating systems:
l RHEL 6.x, 7.x
l SLES 12.x, SLES 15
l ESXi 6.0, 6.5, 6.7
2. All servers must have the following third-party tools installed on them:
l Netcat
l StorCLI or PercCLI
l smartctl
3. Requires a VxFlex OS Gateway server:
l On a Linux or Windows Gateway server. VxFlex OS is tested on RHEL 6.x and 7.x, on SLES
12.2, 12.3, 12.4, and 15, and on Windows Server 2012 R2, 2016, and 2019.
l At least 1 GB of free disk space per node in the system.
4. A web browser that is supported by the VxFlex OS Installer.
5. Supports LSI RAID controller cards.
6. Supports IPv4/IPv6 network configuration.
7. Supports iproute2. Newer operating systems, such as RHEL 7.x, use iproute2 commands.
Best-practice recommendation
1. Deploy VxFlex OS.
2. Analyze the system to identify issues that should be fixed.
3. Fix any issues.
4. Analyze the system to verify that the issues have been fixed.
5. When the system meets your expectations, you can move it into production.
Limitations and compatibility
l On servers with MegaRAID, the RAID function analysis is not supported. Only StorCLI is
supported.
l RFcache analysis is not performed.
l NVDIMM analysis is not performed.
l Due to default Internet Explorer settings, to expand a report you may need to grant permission
for IE to run scripts. For more information, see the version release notes.
The report is saved as a ZIP archive in the default download location of the server on
which the report was run.
g. To make the VxFlex OS Installer available for subsequent operations, click Mark
operation completed.
3. Display the analysis report:
a. Extract the downloaded file.
The file includes the VxFlexOSSystemDiagnosisReport.html analysis file, and
several TGZ files (one for each node, located in the dumps folder).
b. Double-click VxFlexOSSystemDiagnosisReport.html.
Note: Use Internet Explorer 10 or higher to view the system analysis report.
Results
The System Diagnosis Summary report is displayed in the default web browser on your
computer.
When the analysis first opens, the major sections are shown in summary form. You can expand
them to show detailed diagnostic reports, as follows:
l VxFlex OS components:
This section of the report shows the non-running VxFlex OS server components, that is, SDS,
SDC, and MDM. Failure of these components may affect system performance and data
availability. The VxFlex OS server components provide the following functionality:
n SDS server
The SDS (Storage Data Server) manages the capacity of a single server and acts as the
back-end for data access. The SDS is installed on all the servers contributing storage
devices to the VxFlex OS system. Failure of an SDS may affect the cluster performance and
data availability.
n SDC server
The SDC (Storage Data Client) server is installed on each server that needs access to the
VxFlex OS storage, and acts as the gateway to that storage. Failure of an SDC server
denies its application access to the VxFlex OS storage.
n MDM server
The MDM (Meta Data Manager) server controls and monitors the VxFlex OS system.
Failure of an MDM may affect the cluster performance and data availability.
l Network:
This section of the report checks the connectivity between various VxFlex OS components, as
well as the NIC configuration and performance. Network issues may impact system
performance and data availability.
n Connectivity
Pings between VxFlex OS components detect connectivity-related issues in the system. If
the regular pings succeed, MTU pings are performed, followed by Netcat pings to the ports
used by the VxFlex OS application (a rough sketch of this check sequence appears after the
list below).
If virtual IP addresses are assigned to the MDMs in the cluster, a logical host called "MDM
cluster" (represented by a blue host icon) is displayed in the analysis report. The following
issues are tested:
– SDC connectivity with the virtual IP addresses.
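Here is a rough Python sketch of that check sequence, in the spirit of the tool rather than its
actual implementation: a regular ping, an MTU-sized ping with fragmentation forbidden, and a
Netcat-style TCP probe of an application port. The host name, payload size, and port are
placeholders, and the ping flags assume Linux syntax.

# Illustrative connectivity checks in the spirit of the analysis report:
# regular ping, MTU-sized ping, then a TCP probe of an application port.
# Host, payload size, and port below are placeholders, not product defaults.
import socket
import subprocess

def ping(host, size=None):
    """Return True if the host answers an ICMP ping (Linux ping syntax)."""
    cmd = ["ping", "-c", "1", "-W", "2"]
    if size:
        cmd += ["-M", "do", "-s", str(size)]   # forbid fragmentation for the MTU test
    cmd.append(host)
    return subprocess.run(cmd, capture_output=True).returncode == 0

def tcp_probe(host, port):
    """Netcat-style check that a TCP port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

host, mtu_payload, app_port = "sds1.example.com", 8972, 7072   # placeholders
if ping(host):                                 # regular ping first
    ok_mtu = ping(host, size=mtu_payload)      # then the MTU-sized ping
    ok_port = tcp_probe(host, app_port)        # then the TCP port probe
    print(f"{host}: mtu={'ok' if ok_mtu else 'FAIL'} port={'ok' if ok_port else 'FAIL'}")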
You can show additional details of an error by clicking the details icon next to it. A Diagnosis
Summary pop-up window is displayed.
The following list describes other symbols and interface elements used in the analysis report
display:
l Collapse a list
l Expand a list
l Error counters: Red - Error, Orange - Warning, Blue - Info
Note: An empty cell in the connectivity matrix indicates that no connectivity check was
performed. In such cases, no connectivity is expected.
VxFlex OS Licensing
VxFlex OS installations are fully functional for non-production environments.
However, a license is required to use VxFlex OS in a production environment.
VxFlex OS licenses are purchased by total physical device capacity (in TB). You can activate your
licensed capacity over multiple VxFlex OS systems—each system with its unique installation ID.
For example, your purchase order may have been for 1000 TB. Your License Authorization Code
(LAC) entitles you to activate all or part of that capacity. You can activate 500 TB for one
VxFlex OS system, and leave the rest for another activation, for the same or for a different
system.
To obtain a license for production use, and to receive technical support, open a service ticket with
EMC Support at https://support.emc.com.
When you purchase a license entitlement for VxFlex OS, an email containing the LAC is sent to you
or your purchasing department with a link to the Dell EMC Software Licensing Central website.
The LAC is needed to complete the entitlement activation process on the licensing site, which is
part of the online support portal located at http://support.emc.com.
If you cannot find the LAC email, you can use the Software Licensing Central website to find your
license entitlements.
Activate an entitlement and download the license file using the License
Authorization Code
Use this procedure to activate the entitlements and download the license file when you have
access to the License Authorization Code (LAC) email. The entitlements are the usage rights that
you have purchased for the specific host machine.
Before you begin
Ensure that the following are readily available.
l LAC email
l Sales order number
Procedure
1. Identify the VxFlex OS system's installation ID:
l To identify the ID using the VxFlex OS CLI, log in to the SCLI and run:
scli --query_license
l To identify the ID using the GUI, in the GUI main window click the menu next to the user
name (upper-right corner) and select About.
The installation ID is displayed in the About window.
2. Click the link in the LAC email, and log in.
The link takes you directly to the Software Licensing Central page.
3. Enter the LAC and click Search.
An online wizard assists you with the entitlement activation process.
4. On the Select Products page, select the product to activate and then click Start the
Activation Process.
5. On the Company Details page, confirm (or update) your company's information and then
click Select a Machine.
6. On the Select a Machine page, select the machine on which to activate the product and
then click Save Machine & Continue.
Click Search to locate an existing machine (one on which a Dell EMC product was previously
activated), or add a new machine.
Note: In the context of the activation process, a machine is a VxFlex OS system,
which can include multiple servers.
7. On the Enter Details page, enter the quantity of the entitlement in TB to activate on the
machine and the VxFlex OS installation ID identified previously.
To allocate the available capacity over multiple machines, select less than the full amount
available and repeat the activation process on the other machines.
8. Click Next.
9. On the Review page, review your selections.
Note: The license key will be emailed to the username logged in to the licensing system.
To send it to more recipients, click Email to more people and enter their email
addresses.
l To identify the ID using the VxFlex OS CLI, log in to the SCLI and run:
scli --query_license
l To identify the ID using the VxFlex OS GUI, in the GUI main window click the menu next
to the user name (upper-right corner) and select About.
The installation ID is displayed in the About window.
8. On the Company Details page, confirm (or update) your company's information and then
click Select a Machine.
9. On the Select a Machine page, select the machine on which to activate the product and
then click Save Machine & Continue.
Click Search to locate an existing machine (one on which a Dell EMC product was
previously activated), or add a new machine.
Note: In the context of the activation process, a machine is a VxFlex OS system,
which can include multiple servers.
10. On the Enter Details page, enter the quantity of the entitlement in TB to activate on the
machine and the VxFlex OS installation ID identified previously.
To allocate the available capacity over multiple machines, select less than the full amount
available and repeat the activation process on the other machines.
Procedure
1. Locate the email containing the VxFlex OS license key.
2. Click the link in the email and download the license file to the Master MDM server.
Table 9 Licensing error messages

Message | Cause | Resolution
The license key is invalid or does not match this version. | The license key is invalid. | Contact Support.
The current system configuration exceeds the license entitlements. | More capacity has been installed than the license allows. | Reduce capacity, or extend the license capacity.
Operation could not be completed. The license capacity has been exceeded. | Adding the SDS or device would cause the licensed capacity to be exceeded. | Do not add the SDS or device, or extend the license capacity.
The license key is too long. | The license file is larger than expected. | Check the accuracy of the license key.
The license has expired. | The duration of the license has ended. | Extend the duration of the license.
The issuer of the license you are attempting to add does not match that of the product. | The license key is invalid. | Contact Support.
l To view license information using the VxFlex OS CLI, run:
scli --query_license
Note: Actual command syntax is operating-system dependent. For more information, see
the VxFlex OS CLI Reference Guide.
l To view license information from the VxFlex OS GUI:
1. From the Preferences menu (upper-right corner), select System Settings.
The System Settings window displays license and certificate information.
This section describes the next steps for using VxFlex OS following deployment.
What to do next
Once you have deployed VxFlex OS, proceed to the post-installation tasks in the Configure and
Customize VxFlex OS Guide, and perform mandatory and optional configuration tasks.