An "Apples To Apples" Comparison: June, 2016
An "Apples To Apples" Comparison: June, 2016
COMPARISON
June, 2016
EMC CONFIDENTIAL: FOR INTERNAL USE ONLY. DO NOT DISTRIBUTE
3PAR simply cannot meet the demands of enterprise iCDM and falls short in three key areas: performance, flexibility, and space reclamation:
• Performance
– Snapshot creation degrades production performance by 40%
– Volume performance drops to ZERO for several seconds while creating snapshots
– Performance remains degraded indefinitely when snapshots are created with any frequency
– Recovering to pre-snapshot performance takes hours once snapshot creation has ceased
• Flexibility
– Restoring a parent volume from a snapshot results in extended downtime
– Refresh capabilities are extremely limited and pertain only to child objects
• Space Reclamation
– Dirty space accumulates faster than the slow garbage collector can reclaim it, eventually taking the array offline. To complete testing successfully, utilization had to be dialed back to a meager 25% of usable capacity!
– Capacity management and reconciliation are nearly impossible when the array reports negative space savings of worse than -200%, rendering deduplication virtually unusable
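The worse-than -200% figure follows directly from how space savings are typically computed. A minimal sketch, assuming savings are reported as 1 minus the ratio of physical space consumed to logical data written (the exact formula 3PAR uses is not confirmed here):

```python
# Sketch of a standard space-savings formula, assuming savings are
# reported as 1 - (physical space used / logical data written).
def space_savings(logical_written, physical_used):
    """Return fractional space savings; negative means expansion."""
    return 1.0 - (physical_used / logical_written)

# A healthy 4:1 dedupe ratio yields 75% savings:
print(f"{space_savings(4.0, 1.0):.0%}")   # 75%

# If metadata and unreclaimed garbage inflate physical use to 3x the
# logical data written, reported savings go negative, to -200%:
print(f"{space_savings(1.0, 3.0):.0%}")   # -200%
```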
• Full performance copies
– XtremIO ✓: No loss in PROD performance
– 3PAR ✗: -40% performance loss
• Efficient retention/space management
– XtremIO ✓: Delete copies and reclaim space after the retention period
– 3PAR ✗: Space reclamation requires deleting all DEV/QA copies!
• Flexible workflow
– XtremIO ✓: "Catch up" DEV/QA with PROD; any depth allowed
– 3PAR ✗: Not allowed; only a leaf node can be refreshed from its parent
3PAR VOLUMES: TPVV VS TDVV
• TPVV – Thin provisioned virtual volume
– Space efficiency provided via thin provisioning only
– No data reduction capabilities
– Snapshot implementation: COW (copy-on-write)
• TDVV – Thin deduplicated virtual volume
– Thin provisioning plus inline deduplication
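The write penalty of COW snapshots can be seen in a minimal sketch. This is illustrative only, not 3PAR's actual implementation: the first write to each block after a snapshot must first preserve the old data, adding an extra read and write.

```python
# Minimal copy-on-write (COW) snapshot sketch -- illustrative only,
# not 3PAR's actual implementation.
class CowVolume:
    def __init__(self, nblocks):
        self.blocks = ["old"] * nblocks
        self.snapshots = []        # each snapshot: {block_index: saved_data}

    def create_snapshot(self):
        self.snapshots.append({})  # empty delta; filled in lazily

    def write(self, idx, data):
        # The first write to a block after a snapshot must first copy
        # the old data into every snapshot that hasn't preserved it yet;
        # this extra read+write is the COW performance penalty.
        for snap in self.snapshots:
            if idx not in snap:
                snap[idx] = self.blocks[idx]
        self.blocks[idx] = data

    def read_snapshot(self, snap_no, idx):
        # Unmodified blocks are read through to the live volume.
        return self.snapshots[snap_no].get(idx, self.blocks[idx])

vol = CowVolume(4)
vol.create_snapshot()
vol.write(0, "new")
print(vol.blocks[0], vol.read_snapshot(0, 0))  # new old
```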
• 4 controllers (nodes)
• 16 x 16Gb FC ports (4 per controller)
• 4 shelves
– 2 x DPE (controllers & SSDs)
– 2 x DAE (SSDs only)
• 48 x SSDs (12 per shelf)
Purpose:
• To establish a performance baseline for volumes when no snapshots exist on the array
• To observe and document the performance impact associated with creating snapshots
• To observe and document the duration required to recover to the pre-snapshot performance baseline
• Run a 12-hour steady-state workload (50R/50W, mixed block sizes, 100% random)
• Snapshot schedule:
– No snaps during the first hour of testing, to establish a baseline
– At the 1-hour mark, begin creating snapshots every 15 min for a duration of 3:45 (hh:mm)
– Disable snapshot schedules for the remaining duration of testing: 7:15 (hh:mm)
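One way to reproduce a workload of this shape is an fio job file; the sketch below generates one. The option names are standard fio options, but the block-size split and device path are illustrative assumptions, not the parameters used in this test.

```python
# Emit an fio job file approximating the workload described above:
# 12 h steady state, 50% read / 50% write, mixed block sizes, 100% random.
# Verify option names against your fio version; the device path and the
# block-size split are placeholders, not the test's actual parameters.
job = "\n".join([
    "[global]",
    "ioengine=libaio",
    "direct=1",
    "time_based=1",
    "runtime=43200",                      # 12 hours, in seconds
    "rw=randrw",                          # 100% random, mixed read/write
    "rwmixread=50",                       # 50R/50W
    "bssplit=4k/25:8k/25:16k/25:64k/25",  # mixed block sizes (example split)
    "iodepth=32",
    "",
    "[steady-state]",
    "filename=/dev/mapper/test-lun",      # placeholder device
    "",
])
with open("steady_state.fio", "w") as f:
    f.write(job)
print("wrote steady_state.fio")
```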
Purpose:
• To observe and document the performance characteristics of the array when both volumes and snapshots of those volumes are mounted
• To observe the impact of creating snapshots of both the volumes and mounted snapshots
• 4 controllers
• 8 x 8Gb FC ports (2 per controller)
• 2 shelves
– 2 x DAE (SSDs only)
• 50 x SSDs (25 per shelf)
XtremIO:
Mean IOPS 144,646
HP 3PAR 8440:
Mean IOPS 83,711
(a 33% drop compared to normal steady state)
INCONSISTENT PERFORMANCE!
XtremIO:
Mean latency 3.54 ms
StdDev 0.0695 ms
HP 3PAR 8440:
Mean latency 6.19 ms
StdDev 0.822 ms
INCONSISTENT PERFORMANCE!
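The mean/StdDev comparison above can be reproduced from per-interval latency samples with Python's statistics module; the sample values below are illustrative, not the raw test data.

```python
# Compare latency consistency the way the slide does: mean plus standard
# deviation over per-interval latency samples.
# These sample lists are illustrative, not the test's raw data.
from statistics import mean, stdev

xtremio = [3.45, 3.52, 3.58, 3.61, 3.54]  # ms, tight spread
hp3par  = [5.10, 7.30, 5.80, 6.90, 5.85]  # ms, wide spread

for name, samples in [("XtremIO", xtremio), ("3PAR", hp3par)]:
    print(f"{name}: mean {mean(samples):.2f} ms, stdev {stdev(samples):.3f} ms")
```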
Purpose:
• To compare and contrast the front-end bandwidth requested by the hosts versus the resulting back-end bandwidth on the physical SSDs when writing unique data
• To compare and contrast the front-end bandwidth requested by the hosts versus the resulting back-end bandwidth on the physical SSDs when writing 4:1 dedupe data
[Chart: 4:1 dedupe workload. Hosts writing ~1.1 GB/s total at the front end while 3PAR writes to SSDs at ~2.5 GB/s on the back end ("DEDUPE CHAOS")]
[Chart: creating the first snapshot causes a -20% drop in front-end write bandwidth; 3PAR continues writing to SSDs at ~2.0 GB/s ("DEDUPE CHAOS")]
Purpose:
• To establish a performance baseline when IO is running to both volumes and their mounted snapshots
• To observe and document the performance impact when creating additional snapshots for:
– Volumes only
– Snapshots only
– Both volumes and snapshots simultaneously
[Chart: READ/TOTAL IOPS showing a -40% drop]
Scenario
1. Create and fill 16 volumes
2. Create snapshots of the volumes
3. Mount volumes and snapshots
4. Run steady workload
5. Create CGs
6. Schedule snapshots of CG1 and CG3
CG1: TPVV.0 – TPVV.7; CG2: TPVV.0.S1 – TPVV.7.S1
CG Snapshots (8 vols/CG)
1. Every 15 minutes
2. 30-second pause between CG1 and CG3 schedules
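The schedule logic above (snap CG1 and CG3 every 15 minutes with a 30-second pause between them) can be sketched as follows; create_cg_snapshot() is a hypothetical stand-in for the array's snapshot call, not a real CLI or API.

```python
# Sketch of the snapshot schedule used in the test: snap CG1 and CG3
# every 15 minutes, with a 30-second pause between the two groups.
# create_cg_snapshot() is a hypothetical stand-in for the array call.
import time

INTERVAL = 15 * 60   # 15 minutes between schedule cycles
PAUSE = 30           # 30 s between the CG1 and CG3 snapshots

def create_cg_snapshot(cg_name):
    print(f"snapshot taken for {cg_name}")  # placeholder for the array call

def run_schedule(cycles, sleep=time.sleep):
    for _ in range(cycles):
        create_cg_snapshot("CG1")
        sleep(PAUSE)                 # 30 s pause between groups
        create_cg_snapshot("CG3")
        sleep(INTERVAL - PAUSE)      # wait out the rest of the 15 min

run_schedule(cycles=1, sleep=lambda s: None)  # dry run: no real sleeping
```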
3PAR 8440 SNAPS: USE THE RIGHT UI
[SSMC screenshot]
When viewing performance charts generated by SSMC, we see smoothed lines that give the impression of semi-consistent performance.
Performance drops to zero! [Chart: READ/WRITE/TOTAL IOPS series]
Write IOPS drop from ~70K to ZERO (~10 s), hold at ZERO IOPS (~7 s), then recover to ~60K (~7 s)
Read IOPS drop from ~70K to ZERO (~10 s), hold at ZERO IOPS (~8 s), then recover to ~60K (~8 s)
[Chart scale: 10 seconds]
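The cost of a drop-to-zero window is easy to quantify: at a ~70K IOPS steady state, every second of stall leaves roughly 70,000 IOs queued or timing out on the hosts. A quick sketch using the chart annotations above:

```python
# Back-of-envelope cost of a drop-to-zero window: IOs that must queue
# (or time out) while the array delivers zero IOPS.
def stalled_ios(iops_before, stall_seconds):
    return iops_before * stall_seconds

# ~70K IOPS before the stall, ~10 s window (from the chart annotations):
print(stalled_ios(70_000, 10))  # 700000 delayed IOs per snapshot event
```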
HP 3PAR VIRTUAL COPIES
AGILITY: RESTORE & REFRESH
[Diagram: virtual copy tree. Base volume P with an "INSTANT!" backup copy; S1 (Development Copy 1, read/write) with child snapshot S1.1 (test/QA copy, read only); S2 (Test Copy 2, read only). Annotations: "Refresh supported", "Monitor progress".]
• "Online" refers to the array-side status of the volume ONLY, and means only that the promote operation can be executed for volumes that have exported VLUNs
• During an online promote, the host file system must be unmounted or offline. Failing to do so will result in data corruption and will require a second restore of the data.
• Results in extended downtime, since the volume is offline for the duration of the promote operation