VPLEX Architecture
VPLEX is deployed as a cluster consisting of one, two, or four engines (each containing two directors), and a
management server. A dual-engine or quad-engine cluster also contains a pair of Fibre Channel switches for
communication between directors. Each engine is protected by a standby power supply (SPS), and each Fibre
Channel switch gets its power through an uninterruptible power supply (UPS). (In a dual-engine or quad-
engine cluster, the management server also gets power from a UPS.) The management server has a public
Ethernet port, which provides cluster management services when connected to the customer network. This
server provides the management interfaces to VPLEX — hosting the VPLEX web server process that serves
the VPLEX GUI and REST-based web services interface, as well as the command line interface (CLI) service.
In the VPLEX Metro and VPLEX Geo configurations, the VPLEX Management Consoles of each cluster are
interconnected using a virtual private network (VPN), allowing remote cluster management from a local
VPLEX Management Console. When the system is deployed with a VPLEX Witness, the VPN is extended to
include the Witness as well.
VPLEX Metro - two VPLEX clusters located within or across data centers separated by up to 5 ms of
RTT latency, providing synchronous access to distributed volumes at both sites for improved
availability.
VPLEX Geo - two VPLEX clusters located within or across multiple data centers separated by up to
50 ms of RTT latency. VPLEX Geo uses an IP-based protocol over the Ethernet WAN COM link that
connects the two VPLEX clusters. This protocol is built on top of the UDP-based Data Transfer (UDT)
protocol, which in turn runs over the User Datagram Protocol (UDP), a Layer 4 transport protocol.
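The layering just described -- a reliable transfer protocol built on top of unreliable UDP -- can be illustrated with a minimal sketch. The stop-and-wait retransmission below is an illustrative stand-in for UDT's reliability mechanisms, not VPLEX's actual implementation; the function names and loss model are assumptions for the example.

```python
import random

def lossy_udp_send(channel, datagram, loss_rate=0.3):
    """Simulates an unreliable UDP hop: datagrams may be silently dropped."""
    if random.random() >= loss_rate:
        channel.append(datagram)

def reliable_send(payloads, loss_rate=0.3):
    """Stop-and-wait reliability layered over the lossy UDP hop --
    analogous in spirit to how UDT adds reliability on top of UDP."""
    received = []
    for seq, payload in enumerate(payloads):
        while True:
            channel = []
            lossy_udp_send(channel, (seq, payload), loss_rate)
            if channel:                      # datagram arrived; receiver "acks"
                received.append(channel[0][1])
                break                        # move on to the next payload
            # no ack: retransmit the same sequence number

    return received

random.seed(7)
data = ["write-1", "write-2", "write-3"]
assert reliable_send(data) == data           # all payloads delivered, in order
```

Because the sender keeps retransmitting until the datagram gets through, delivery is in-order and complete despite the lossy hop, which is the property the WAN COM link needs from UDT.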
VPLEX Metro and VPLEX Geo systems optionally include a Witness. The Witness is implemented as a
virtual machine and is deployed in a separate fault domain from two VPLEX clusters. The Witness is used to
improve application availability in the presence of site failures and inter-cluster communication loss.
The VPLEX operating system is GeoSynchrony (currently version 5.3), designed for highly
available, robust operation in geographically distributed environments.
VPLEX Geo uses write-back caching to achieve data durability without requiring synchronous operation. In
this cache mode, VPLEX accepts host writes into cache and places a protection copy in the memory of another
local director before acknowledging the data to the host. The data is then sent asynchronously to the back-end
storage arrays. Cache vaulting logic within VPLEX stores any unwritten cache data onto local SSD storage in
the event of a power failure.
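The write-back flow above can be sketched as follows. The director names, the two-copy rule, and the flush step are simplified illustrations of the behavior described in the text, not VPLEX's actual data structures or APIs.

```python
class Director:
    """Simplified director: holds dirty cache entries awaiting destage."""
    def __init__(self, name):
        self.name = name
        self.cache = {}                      # block -> data (dirty until flushed)

def host_write(primary, protection, block, data):
    """Accept a host write: cache it locally, mirror it to a second local
    director, then acknowledge -- before touching back-end storage."""
    primary.cache[block] = data
    protection.cache[block] = data           # protection copy in another director
    return "ack"                             # host sees the write complete here

def flush_to_backend(primary, protection, backend):
    """Asynchronously destage dirty data to the back-end array."""
    backend.update(primary.cache)
    primary.cache.clear()
    protection.cache.clear()

d1, d2, array = Director("A"), Director("B"), {}
assert host_write(d1, d2, 100, b"payload") == "ack"
assert d1.cache[100] == d2.cache[100]        # two in-memory copies before flush
flush_to_backend(d1, d2, array)
assert array[100] == b"payload" and not d1.cache
```

The key ordering is that the acknowledgment returns only after the protection copy exists in a second director's memory, so a single director failure before the asynchronous flush does not lose acknowledged data.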
The individual memory systems of each VPLEX director are combined to form the VPLEX distributed cache.
Data structures within these memories in combination with distributed algorithms achieve the coherency and
consistency guarantees provided by VPLEX virtual storage. These guarantees ensure that the I/O behavior
observed by hosts accessing VPLEX storage is consistent with the behavior of a traditional disk. Any director
within a cluster is able to service an I/O request for a virtual volume served by that cluster. Each director
within a cluster is exposed to the same set of physical storage volumes from the back-end arrays and has the
same virtual-to-physical storage mapping metadata for its volumes. The distributed design extends across
VPLEX Metro and Geo systems to provide cache coherency and consistency for the global system. This
ensures that host accesses to a distributed volume always receive the most recent consistent data for that
volume.
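A toy model of the property described above -- any director in a cluster can service I/O for a volume because every director shares the same virtual-to-physical mapping -- might look like this. The mapping layout, director identifiers, and volume names are illustrative assumptions, not VPLEX internals.

```python
# Shared virtual-to-physical mapping metadata, identical on every director.
MAPPING = {("vol1", 0): ("array-A", 512),
           ("vol1", 1): ("array-A", 513)}

# Back-end storage volumes, visible to every director in the cluster.
BACKEND = {("array-A", 512): b"blk0",
           ("array-A", 513): b"blk1"}

def read(director_id, volume, vblock):
    """Any director resolves the same virtual block to the same physical
    location, so reads return the same data regardless of which director
    services the request."""
    phys = MAPPING[(volume, vblock)]
    return BACKEND[phys]

# Two different directors service the same virtual block identically.
assert read("director-1-A", "vol1", 0) == read("director-1-B", "vol1", 0)
```

In the real system the mapping is metadata kept consistent by distributed algorithms rather than a static table, but the observable property is the one the assertion checks: the result of an I/O does not depend on which director handled it.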