ALL Notes
Uses a track table to record the tracks that have changed. This is the key technology behind TimeFinder/Snap, TimeFinder/VP Snap, TimeFinder/Clone, and SRDF.
TF/Clone is suitable if:
- clones are to be used for recovery scenarios, or
- multiple copies of production data are needed and you want to reduce disk contention and improve data access.
TF/Snap: only a fraction of the data changes on the production volumes.
TF/VP Snap: you want to create space-efficient snaps for thin devices.
VMAX 20K/40K -- TF/VP Snap, TimeFinder/Clone, and TimeFinder/Snap.
## TF VMAX 10K/VMAXe
TF/Clone & TF/VP Snap (TF/Snap is not available)
TF/Clone
No mirror position is required; a clone can be the source for the SRDF family.
The copy is R/W enabled; 4 mainframe copies; suited to high workloads; immediate data availability; 16 concurrent copies; targets can be RAID 1/5/6; protected establish & restore; incremental resync. 100% of the source volume's space is required.
TF/Snap (not available on VMAX 10K/VMAXe):
Does not require a mirror position.
Supports moderate I/O workloads and functionality.
Data is immediately available.
The copy is R/W enabled.
128 copies are possible.
Cannot be the source for the SRDF family. A server mounting a snapshot has full read/write access to the snapshot.
Symmetrix 40K always creates multi-virtual snap copies.
TF/VP Snap: improved cache utilization.
No mirror position.
32 snaps per source volume.
Available at Enginuity 5875 or higher.
With VP Snap, the tracks can be saved in the same thin pool as the source or in another.
TF Fundamentals - Clone: volume copy.
RAID 1/5/6 targets.
Immediately R/W.
16 copies of the production volume.
Precopy option and copy-on-first-write are available.
Supports TF/Mirror scripts.
TF/Mirror emulation (Mirror commands are converted to clone operations.)
A maximum of 8 differential sessions can exist.
TF/Clone operations:
1. Create - creates the relationship between source and target.
2. Activate - the clone is now active, and the target is available for R/W access immediately after activation.
Once the state reaches Copied after one copy cycle, A -> B can be activated; and once A -> B is in the Copied state, B -> C can be activated.
With Enginuity 5874, TF/Mirror operations run in TF/Clone emulation mode whenever TF/Mirror functionality is called.
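As a minimal sketch of the create/activate cycle above (the device group `clonedg` and the logical device names are hypothetical):

```shell
# Create the clone relationship between source and target;
# the target data is not usable yet.
symclone -g clonedg create DEV001 sym ld TGT001 -differential -nop

# Activate: the target is R/W-accessible immediately while the
# background copy continues.
symclone -g clonedg activate DEV001 sym ld TGT001 -nop

# Watch the session state until it reaches Copied.
symclone -g clonedg query
```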
####CLONE FROM CLONE
symdg show clone_a_to_b
The clone target is in a SYMAPI group.
SYMAPI_ALLOW_DEV_IN_MULTIPLE_GRPS = ENABLE (this option, set in /var/emc/config/options, has to be used when you are attempting a cascaded clone.)
##Creating the Clone sessions ###
The SYMAPI option SYMAPI_ALLOW_DEV_IN_MULTIPLE_GRPS lets the target act as a source.
For devices A -> B, the activation of B -> C can only be done once the A -> B state has turned to Copied; otherwise it fails.
###Session states when both clone sessions exist
A->B: Created, Copied - must not be in CopyInProgress.
B->C: Created, Copied, or CopyInProgress.
** Both sessions cannot be in CopyInProgress at the same time.
Once the A->B state is Copied, a restore to target (RTT) can be performed from C->B.
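The ordering rule can be sketched as a command sequence (the group names `clone_a_to_b` / `clone_b_to_c` are hypothetical; `symclone verify` is used here to wait for the Copied state):

```shell
# A -> B: first hop of the cascade.
symclone -g clone_a_to_b create -differential -nop
symclone -g clone_a_to_b activate -nop

# Block until the A -> B session reports Copied
# (both hops must never be in CopyInProgress together).
symclone -g clone_a_to_b verify -copied -i 30

# B -> C: safe to create and activate only now.
symclone -g clone_b_to_c create -differential -nop
symclone -g clone_b_to_c activate -nop
```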
##Recreate in a cascaded environment: A->B in TF/Clone,
B->C in TF/VP Snap or TF/Snap, with Enginuity 5874 Q4 2012.
The session A->B can be recreated, which leads to an incremental copy of the contents of A->B without affecting the existing session B->C.
#####Copying to larger targets: SYMCLI_CLONE_LARGER_TGT=ENABLED.
Source-to-target copying is allowed but restore is not, which is useful in migrations. Differential copy is not allowed; only full copy is allowed.
Concatenated meta devices are also not allowed for these operations.
Striped meta devices can be used if the source and target have the same number of meta members; the target meta members can be larger than the source's.
#####TimeFinder/Clone and VP Snap operations
#####TimeFinder/VP Snap: new features with Enginuity 5876
Multiple targets are possible for a source volume; they lie in the same thin pool and share tracks.
All targets must be bound to the same thin pool.
Both the source device and the clone target must be virtual (thin) devices.
Supported with FBA and AS400 D910 iSeries devices.
32 sessions can be created with the same source device.
####TimeFinder VP Snap - how it works: sessions are created as nocopy and nodifferential.
Uses copy-on-first-write (COFW), optimized as asynchronous COFW (ACOFW).
New writes on the source device trigger a copy to the save devices.
Reads from the snap target are redirected to the source device; for the point-in-time image, the save device is referenced.
On activation the session enters the CopyOnWrite state.
####TimeFinder VP Snap - how it works (2): on activating a second session between the same source and a different target:
Case 1 - no write has occurred on the protected track: reads are served from the same source.
Case 2 - a write has occurred:
  - If the write occurred after the activation of the first session, the second session gets its own track, i.e. the track is copied from source to target.
  - If the track was not modified after the first activation, the original is copied to the thin pool once - hence shared tracks.
symdg show vpsnapdg| more
symclone create -vse DEV001 sym ld TGT001 -nop
symclone create -vse DEV002 sym ld TGT002 -nop (after creation, check the session state)
symclone query -multi
Flags:
C - background copy: X = background copy, . = none, V = VSE.
G - grouped with target.
D - differential.
P - precopy.
###Activating a TimeFinder VP Snap session: after activation the session changes to CopyOnWrite.
symclone activate -consistent DEV001 sym ld TGT001 -nop
symclone activate -consistent DEV002 sym ld TGT002 -nop
symclone query -multi
###Display shared tracks: symcfg list -sid 12 -pool -thin -detail
To show the shared tracks:
symcfg list -sid 12 -pool -thin -detail
PTECSL flags:
P (pool type) - S = Snap pool, R = RDFA DSE, T = Thin.
T (technology) - S = SATA, F = Fibre, E = EFD.
E (emulation) - F = FBA, A = AS400, 8 = CKD3380, 9 = CKD3390.
C (compression) - E = enabled, D = disabled, N = enabling, S = disabling.
S (state) - E = enabled, D = disabled, B = balancing.
L (location) - internal or external disk.
##Display shared tracks (2)
symcfg list -sid 12 -tdev -range 0243:0323 -bound -detail
symcfg list -sid 12 -tdev -range 121:2323 -bound -detail
ESPT flags:
E (emulation) - A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390.
S (shared tracks) - S = shared.
P (persistent) - persistent allocation status.
T (status) - Binding, Bound, Allocating, Compressing, Unbound.
###Restore TF/VP Snap
symclone restore -sid 12 DEV001 sym ld TGT002 -nop
This creates a new session; all the existing sessions remain as they are.
Once the restoration has completed, the session can be terminated with the -restored option.
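Putting the restore steps above together (the sid and device names are the ones from the example; the terminate step uses the -restored option described above):

```shell
# Restore creates a new session; existing VP Snap sessions remain intact.
symclone restore -sid 12 DEV001 sym ld TGT002 -nop

# Watch until the restore completes.
symclone query -multi -sid 12

# Then terminate only the restore session.
symclone terminate -restored -sid 12 DEV001 sym ld TGT002 -nop
```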
###TimeFinder/VP Snap considerations: -vse can only be specified at the time of session creation.
Once the session is created, the mode cannot be changed.
Once the target devices are managed by FAST VP, relocation is not possible.
TimeFinder/Snap and TimeFinder/VP Snap cannot coexist on the same source volume.
TimeFinder/Clone and TimeFinder/Snap can coexist on the same source volume.
A TimeFinder/Clone session cannot be created as a nocopy session.
A target larger than the source is also not possible in TF/VP Snap.
###TimeFinder/VP Snap considerations (2): cascading of TimeFinder/VP Snap is not allowed - the target of a TimeFinder/VP Snap session cannot be the source of another TimeFinder/VP Snap session.
A TimeFinder/VP Snap session from a TimeFinder/Clone target is allowed, provided the TimeFinder/Clone state is either Split or Copied.
TimeFinder/VP Snap from an SRDF R2 device in a consistent and active SRDF/A session is possible, provided device-level write pacing is enabled.
Both the R1 and R2 arrays should be at Enginuity 5876 or above.
######Restore operation in a cascaded environment:
A->B TimeFinder/Clone
B->C TimeFinder/VP Snap session
Restore C->A via B (without terminating the A->B and B->C sessions).
##TimeFinder VP Snap restore to target - preconditions:
A->B state should be Copied or Split.
B->C state should be Copied or CopyOnWrite.
1. symsnap restore B <- C once Copied/CopyOnWrite. (symsnap is used for TF/Snap, not for TF/VP Snap.)
2. symclone restore A <- B once Copied/CopyOnWrite.
Terminate the operation once the restoration has completed successfully.
TimeFinder Snap restore to target: A->B in Copied/Split, B->C in Copied/CopyOnWrite.
For TF/VP Snap:
symclone restore B <- C
symclone restore A <- B
Terminate the sessions once both are restored successfully.
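A sketch of the cascaded restore C -> A via B, using hypothetical device group names for the two hops:

```shell
# Preconditions: A -> B (clone) is Copied/Split,
#                B -> C (VP Snap) is Copied/CopyOnWrite.

# Step 1: restore the point-in-time copy from C back to B.
symclone -g clone_b_to_c restore -nop

# Step 2: once B holds the data, restore B back to A.
symclone -g clone_a_to_b restore -nop

# Terminate the restore sessions after both restores complete.
symclone -g clone_b_to_c terminate -restored -nop
symclone -g clone_a_to_b terminate -restored -nop
```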
###Restore operation in a concurrent environment:
A->B TimeFinder/Clone
A->C TimeFinder/VP Snap
Restoration can be done A <- B.
A->B TimeFinder/Clone
A->C TimeFinder/Snap
Restoration can be done A <- B.
Note that TimeFinder/Snap and TimeFinder/VP Snap cannot exist together.
###Module 2: TF/Clone and TimeFinder VP Snap operations
symdg create rdmdg -nop
symclone -g rdmdg establish -full -consistent -tgt -nop -v -sid 12
symclone -g rdmdg query
#####Module 2 Session 5: Device Groups - all device groups are stored in symapi_db.bin, or in GNS if it is active.
A DG can be created, recreated, and renamed.
Data protection:
TimeFinder/Clone actions:
create, recreate, terminate, activate, set mode, establish, restore.
###Module 3: TimeFinder/Snap operations. Type 1 - Normal Snap -> 16 copies; Multi-virtual Snap -> 128 copies.
Logical point-in-time images: pointers are created on the virtual devices, and these virtual devices are then presented to the host.
128 snap sessions are available immediately - TimeFinder/Snap provides unmatched replication flexibility.
The default max. number of snaps is 16; if a restore is planned, the usable number goes down to 15.
In the case of Enginuity 5875, 2 sessions are reserved for restoration, so the max. possible number of snaps is 14.
Multi-virtual snap can be enabled with the option SYMCLI_MULTI_VIRTUAL_SNAP = ENABLED.
###TimeFinder/Snap operations: the symsnap command is used for normal snap operations, whereas for VP Snap the symclone command is used with the -vse option.
##TimeFinder Snap: the target is a virtual device (VDEV) mapped to a host.
Copying only occurs when there are writes to the source or target.
Only original data which has changed is saved to the save pool.
Queries can be done using the symsnap command.
TimeFinder/Snap - copying data:
TimeFinder/Snap uses a process called copy-on-first-write: when the host attempts to write a track on the source, the original data is copied to the save pool the first time only; each existing track remains as-is until a write is initiated on it for the first time.
New writes to the VDEV are also saved to the save pool.
##Striping the save devices.
A copy-on-write is done for each changed track.
The copy is striped from the source to the save devices.
Tracks are striped in round-robin fashion across the save devices to improve performance.
##Terminating a copy session: when snap sessions are terminated, the tracks are reclaimed in the background and the space is released. All copy structures are freed up, and the virtual devices are made Not Ready.
##Multiple save device pools: Symmetrix save pools are the special devices that provide the physical storage; save pool allocation should be considered carefully.
Write-intensive applications should have larger snap pools.
Long-duration snapshots should also have larger snap pools.
The -svp <pool> option can be given to the create action to specify which save pool is to be used.
##symsnap operations:
create, activate, restore, terminate, recreate (Enginuity 5874 and SE 7.2 or higher), establish (Enginuity 5874 and SE 7.4 or higher).
Prior to Enginuity 5874: Create -> Activate -> Terminate.
Enginuity 5874 and SE 7.4 and above: Recreate -> Activate (incremental).
##Configuration considerations
Some cache is required for TF/Snap operations.
The number of snap VDEVs must also be considered.
VDEVs (snapshots) are persistent, cache-only devices and consume a Symmetrix device ID.
Save devices should be spread across as many physical devices as possible.
Save area monitoring:
A save device threshold can be set and put under monitoring.
If the save device area fills, the sessions that require free space are put into the Failed state.
Save devices can be added dynamically.
Save device space considerations: if a write cannot be completed because there is no space in the save devices, the devices move into the Not Ready state and copy-on-write is disabled.
Draining save devices:
Permits a disable command to work on active save devices.
All active tracks are copied to other devices in the save pool.
Disabling a save device leads to draining its data to the other devices. Disabling can also lead to pool overflow, in which case sessions get terminated.
disable dev 2323 in pool snappool, type=snap;
##Monitoring save devices
symsnap -sid 20 monitor -svp appn_a_pool -i 5
Monitoring save devices with an action script:
symsnap monitor -sid 12 -percent 80 -action onepercentscript.sh -svp default -i 60 -c 5
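The -action script that symsnap monitor invokes can be anything executable. As a sketch (the argument convention below is an assumption, not the documented Solutions Enabler interface), a threshold-check helper might look like:

```shell
#!/bin/sh
# Hypothetical action helper: warn when a save pool crosses a
# percent-full threshold. Arguments: pool name, percent full,
# optional threshold (default 80).
pool_alert() {
    pool="$1"
    pct_full="$2"
    threshold="${3:-80}"
    if [ "$pct_full" -ge "$threshold" ]; then
        echo "ALERT: pool $pool at ${pct_full}% (threshold ${threshold}%)"
        return 1
    fi
    echo "OK: pool $pool at ${pct_full}%"
}

pool_alert appn_a_pool 85 80 || true   # prints the ALERT line
pool_alert appn_a_pool 40 80           # prints the OK line
```

From here the script could mail an operator or add save devices dynamically, as the notes above describe.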
##TimeFinder/Snap operations.
symdev list -vdev -sid 12
2E3:2E6 - production host; 2E7:2EA - backup host.
symsnap list -pools
-svp appn_a_pool
Incremental restore to a BCV which has been split but still holds an incremental relationship with the source device:
For a device outside of the sessions, a full restore has to be performed.
The target should be equal to or greater than the source device; and as the target is a BCV, emulation mode will be used.
During restore all the existing sessions are maintained. In Enginuity 5876 one restore session is created, whereas in Enginuity 5875 two sessions are created for restorations.
##Restore a snap session
symsnap restore -nop
symsnap query
When the symsnap restore command is issued, it makes the source devices Not Ready for a short time; when the restore starts, the source device becomes Ready again, although the VDEVs remain Not Ready. They can be made Ready again by issuing these commands:
symdev ready 2E7 -sid 20; symdev ready 2E8 -sid 20
##symdev list -held
##symsnap query -multi
Even after the restore operation, the original snap session is maintained; so in order to recreate the existing session, the restore session has to be closed:
symsnap terminate -restored -nop
symsnap query -multi
This converts the existing VDEVs back to read/write.
##Duplicate snap sessions
Introduced with Enginuity 5875 and SE 7.2.
The original snap session with the source must be created before the duplicate session; once the first snap session is activated, the duplicate snapshot can be taken.
The duplicate snap session will persist even if the original snap session is terminated.
Duplicate snap sessions allow a snap of a VDEV: given the snap session between the STD device and the first VDEV, that VDEV can be used as the source for the next snap create. When the duplicate snap is activated, it is activated against the original STD device, so the point in time will be the same for both VDEVs although the timestamps will vary:
the original snap session might show 10am and the duplicate 11am. Both snaps use the same save pool. Once the original snap session is terminated, the duplicate still persists. A maximum of two duplicate snap sessions can persist, although original sessions may exist up to the permissible limit.
##TimeFinder considerations with XtremSW Cache
Creating a duplicate snap: symdg show duplicate_snap | more
symsnap create -svp appn_a_pool DEV001 vdev ld VDEV001 -nop
####Module 4: SRDF
Operating system independent (open systems and mainframe).
An SRDF group is the relationship between local director ports and remote director ports. Any Symmetrix device that is configured as SRDF must be added to an SRDF group for replication.
Static SRDF groups are stored in the impl.bin file. Dynamic SRDF groups are not written to the symapi_db.bin file, yet they are persistent across power cycles and IMPL.
Dynamic SRDF is enabled by default on Symmetrix VMAX arrays with Enginuity 5874.
##Dynamic SRDF groups
Check that both arrays have the dynamic RDF configuration state enabled (enabled by default).
symcfg list (the "num phys devices" column shows the number of physical devices assigned to the local host where the command is run.)
symcfg list -sid 20 -v
Dynamic device pairs let one create SRDF groups and create, delete, and swap SRDF R1-R2 pairs.
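A typical dynamic-group workflow can be sketched like this; the group numbers, director ports, and the contents of pair.txt are all hypothetical:

```shell
# Create a dynamic SRDF group between local and remote director ports.
symrdf addgrp -label dyngrp -rdfg 10 -sid 20 -dir 7E:1 \
       -remote_rdfg 10 -remote_sid 12 -remote_dir 8E:1 -nop

# pair.txt: one "local_dev remote_dev" pair per line, e.g. "02C7 02D7".
# Create the dynamic R1/R2 pairs and start the initial synchronization.
symrdf createpair -sid 20 -rdfg 10 -f pair.txt -type R1 -establish -nop

# Verify the new pairs.
symrdf -f pair.txt query -sid 20 -rdfg 10
```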
##List available Remote Adapters and currently configured SRDF groups
symcfg list -ra all -sid 20
DEFAULT_RDF_MODE = Synchronous
##Device pair created: symrdf -f pair.txt query -sid 20 -rdfg 10
MDAE flags:
M (mode) - S = sync, A = async, E = semi-sync, C = adaptive copy.
D (domino) - X = enabled, . = disabled.
A (adaptive copy) - D = disk mode, W = write-pending, . = disabled.
E (consistency exempt) - X = enabled, . = disabled, M = mixed.
##List available RAs and currently configured SRDF groups
symcfg list -sid 12 -RA all
##Deleting device pairings
Deleting RDF pairs removes the pairing information from the Symmetrix.
You must suspend the RDF links before issuing the symrdf deletepair command; the state should be Suspended, Split, or Failed Over.
Cancelling the SRDF pair changes the device status from R1/R2 to regular.
Devices in the device group change from RDF to RDF-capable.
symrdf suspend -sid 12 -f pair.txt -rdfg 5
symrdf deletepair -sid 12 -f pair.txt -rdfg 5
##Identify accessible SRDF volumes: syminq
symdev list -r1 (shows all the R1 devices configured on the host)
symdev list -r2 (shows all the R2 devices configured on the host)
##SYMCLI SRDF device groups
Devices can be grouped into device groups.
All devices in a device group must be in the same Symmetrix array.
All devices must be of the same type: R1, R2, or R21.
symdg create -type R1 srdfsg
set SYMCLI_DG=srdfsg
symdg add all dev range 2C7:289
All devices in the device group should be on the same Symmetrix array; the type of device group must be specified.
If the device group type is R1 or R2, then only that type of device can be added.
The device group definition is stored in symapi_db.bin on the host where the symdg was created.
##Display symdg device groups
symdg show srdfsg | more
symdg show
##Displaying SYMCLI device groups
DEV001, DEV002
##symrdf command syntax
symrdf -g <device group> <action> [options]
Actions: suspend, resume, establish, terminate, split, activate, failover, failback, update, restore, set mode.
##SRDF Failover
symrdf failover -nop (executed from the R2 side)
In the failover scenario, the R1 devices are write disabled, the SRDF device pair links become suspended, and R2 acts as production. The steps:
1. Stop all applications.
2. Unmount the file systems.
3. Perform the failover operation.
4. Resume the application from R2.
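The four failover steps can be sketched as commands run on the two hosts (the device group name and mount point are hypothetical):

```shell
# On the R1 host:
# 1. Stop the applications using the R1 devices (application-specific).
# 2. Unmount the file systems.
umount /prod/data

# From the R2 side:
# 3. Perform the failover; R1 becomes WD, links suspend, R2 becomes R/W.
symrdf -g srdfsg failover -nop

# On the R2 host:
# 4. Mount the file systems, resume the application, and verify.
symrdf -g srdfsg query
```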
##symrdf query
The R2 will be read/write and R1 will be write disabled. The RDF pair state is Failed Over.
##symrdf update -nop
This can be performed in the Failed Over state only; in this state the R1 is WD. The command copies all the pending tracks from R2 back to R1.
After running this command the SRDF link status changes from NR to RW.
The update command should be performed before failback, since otherwise the pending write I/Os would make the R1 lag the R2, which would reduce performance.
The status changes to UpdInProg.
##Symrdf query
##SRDF Failback
A failback event should always be done gracefully.
The R1 will be R/W and R2 will be WD; the SRDF links will be set to RW.
##SRDF decision support: concurrent operations
SRDF split - places the units in concurrent operation, suspends the links between R1 and R2, and makes both volumes R/W.
SRDF establish - resumes normal SRDF operation, preserving the data on R1 and discarding the data on R2.
SRDF restore - resumes SRDF operation, preserving the data on R2 and discarding the data on R1.
##symrdf split -nop
symrdf query
##symrdf establish
If the establish command is run in the Split state, the R1 device content is copied to the R2 side, discarding the R2 changes. The R2 device will be WD and R1 will be R/W.
The links become read/write.
##SRDF Restore
symrdf restore -nop
The restore operation resumes SRDF remote mirroring. Changes made to the target while in the Split state are preserved; changes made to the source are overwritten. The R2 device will be write disabled and the links are resumed. R1 can be accessed again without requiring full synchronization, as the data is copied from the R2 side.
##Query after restore: restore also resumes the SRDF links.
##Split, establish, restore - summary
Production continues on the R1 volumes; the R2 volumes are used for DSS operations.
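The DSS cycle summarized above, as a sketch against a hypothetical device group:

```shell
# Suspend the links; both R1 and R2 become R/W for concurrent work.
symrdf -g srdfsg split -nop

# ... run decision-support / reporting jobs against the R2 copy ...

# Resume normal SRDF: keep R1 data, discard the changes made on R2.
symrdf -g srdfsg establish -nop
symrdf -g srdfsg query
```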
##RDF R1/R2 personality swap operations
symrdf swap
It changes the personality of the R1 devices to new R2 devices and of the R2 devices to new R1 devices.
symrdf failover -establish
R1/R2 personality swap use cases:
- Symmetrix load balancing (sometimes it is necessary).
- Disaster recovery drills.
- Data center relocation.
- Maintenance operations on hosts while continuing production at the DR site.
##Concurrent SRDF devices: an SRDF device with 2 SRDF mirrors is called a concurrent SRDF device.
R11 - each R1 mirror is paired with a different R2 device on a remote array.
R21 - this device is used at the secondary site of cascaded SRDF. In this case the R2 is acting as the mirror for the primary R1 site, and also acting as the R1 source for the tertiary site.
R22 - each R2 mirror is paired with two different remote Symmetrix arrays; only one of the R2 mirrors can be read/write at a time.
It is used in SRDF/Star environments, which means it can receive data from just one R1 mirror at a time.
##Concurrent SRDF R11: concurrent SRDF allows two remote SRDF mirrors of a single R1 device. Each pair belongs to a different RA group; for example, one copy for disaster recovery and another for backup. Any combination of SRDF modes is allowed except async with async, although with Enginuity 5875 and above both legs can be in async mode.
R1 <-> R2 site B in synchronous mode, R1 <-> R2 site C in asynchronous mode.
R1 <-> R2 site B in synchronous mode, R1 <-> R2 site C in adaptive copy mode.
R1 <-> R2 site B in synchronous mode, R1 <-> R2 site C in synchronous mode.
##2 synchronous remote mirrors: a write from the primary site is not acknowledged as complete until both remote Symmetrix arrays report that the I/O is in their cache.
1 sync and 1 adaptive copy remote mirror: the SRDF I/O from the synchronously operating secondary device must show ending status at the Symmetrix before a second host I/O can be accepted; in adaptive copy mode the host does not need to wait for the acknowledgement.
A simultaneous restore from both R2s to R1 cannot be performed.
SRDF swap is supported for standard provisioning, and for thin provisioning from Enginuity 5876 Q4 2012, on VMAX 10K/20K/40K.
Failback
symrdf -g srdf_rdm query
symrdf -g srdf_rdm failover -nop
symrdf -g srdf_rdm query
symrdf -g srdf_rdm failback -nop
symrdf -g srdf_rdm query
SRDF/A delta sets:
The capture delta set contains all the writes coming from the source host; it is numbered N.
The transmit delta set in the source Symmetrix, numbered N-1, contains the data currently being sent across the links.
The N-1 set at the receiving side is called the receive delta set; it is in the process of receiving the data from the transmit delta set N-1.
The apply delta set, numbered N-2, is always consistent and is applied on the remote Symmetrix array: its data is applied to the appropriate cache slots, ready to destage to disk. The data in the apply delta set is restartable and consistent.
The Symmetrix performs a cycle switch once the N-1 set is completely received, the N-2 apply set is completely applied, and the minimum cycle time has elapsed. During the cycle switch, N+1 becomes capture, N becomes transmit & receive, and N-1 becomes the apply delta set.
GRP flags (S A U):
Consistency - X = enabled, . = disabled, - = N/A.
Status - A = active, I = inactive.
RDFA mode - S = single session, M = multi session.
Multi-session cleanup - C = cleanup required.
Transmit idle - X = enabled, . = disabled, - = N/A.
DSE - A = active, . = disabled.
Autostart - X = enabled, . = disabled.
Write pacing flags:
Group-level pacing:
  S (status) - A = active, I = inactive.
  A (autostart) - X = enabled, . = disabled.
  U (supported) - X = supported, . = not supported.
Device-level pacing:
  S (status) - A = active, I = inactive.
  A (autostart) - X = enabled, . = disabled.
  U (supported) - X = supported, . = not supported.
Flag for group-level and device-level pacing - Devs Paceable:
  P - X = all devices are paceable, . = no device is paceable.
##List an individual RDF group:symrdf list -sid 20 -rdfg 10
SRDF adaptive copy modes
Posted on February 24, 2013
Here I share some information about adaptive copy modes in SRDF. Basically, adaptive copy modes allow the primary and secondary volumes to be more than one I/O out of synchronization.
There are two adaptive copy modes:
1) Adaptive copy write-pending mode
2) Adaptive copy disk mode
In write-pending mode, link utilization is lower: write-pending slots are merged into SRDF/A cycles, and once there are no more write-pending slots it takes an additional two cycles before R2 is consistent.
##Sync to SRDF/A (1): symrdf query
Any SRDF/A operation must be performed on the whole device group; this means all the SRDF devices must be part of the same SRDF group. This is in contrast to synchronous mode, where operations can be performed on a subset of devices in an SRDF group.
Transition from synchronous to asynchronous:
##Sync to SRDF/A: symrdf set mode async -nop
symrdf enable -nop
symrdf query -rdfa
Transition from adaptive copy to asynchronous:
The current state is SyncInProg.
symrdf set mode async -nop
symrdf enable -nop
symrdf query -rdfa
The state remains SyncInProg; after synchronization it takes two more cycles for the data at the R2 side to be consistent.
##symrdf query -rdfa
MDACE flags:
M (mode) - S = sync, A = async, E = semi-sync, C = adaptive copy.
D (domino) - X = enabled, . = not enabled.
A (adaptive copy) - D = disk mode, W = write-pending, . = disabled.
C (consistency) - X = enabled, . = disabled.
E (consistency exempt) - X = enabled, . = disabled, M = mixed, - = N/A.
##SRDF/A configuration parameters - maximum SRDF/A cache usage: the system-wide parameters are set using the symconfigure command, while group-level settings are done with the symrdf command.
set symmetrix rdfa_cache_percent = 50;
set symmetrix rdfa_host_throttle_time = 2;
##SRDF group-level settings: minimum cycle time
symrdf -sid 12 -rdfg 10 set rdfa -cycle_time 10
Session priority:
symrdf -sid 12 -rdfg 10 set rdfa -session_priority 30
Session priority: the priority range is 1-64, with 1 being the highest priority (last to be dropped).
Minimum cycle time: this is the minimum time after which SRDF/A attempts a cycle switch; it ranges from 1 to 59 seconds.
The minimum cycle time for SRDF/A MSC is 3 seconds. For Enginuity 5875 and above the default value is 15 seconds.
##SRDF/A system configuration parameters
rdfa_cache_percent - defaults to 75, ranges up to 100.
This is the percentage of the maximum number of system write-pending slots available to SRDF/A. Its purpose is to allow other applications to use the WP limit.
As soon as SRDF/A usage exceeds this WP cache limit, SRDF/A drops the session to free up the cache.
Setting it lower reserves WP limit for non-SRDF/A cache usage; setting it higher allows SRDF/A to use more cache, which may impact other applications that use the cache.
rdfa_host_throttle_time - defaults to 0 (range 0-65535).
If > 0, this value overrides rdfa_cache_percent.
When the system WP limit is reached, the system delays host writes until cache slots become freely available.
Each system has a limit of 75% of write-pending slots.
The purpose of this limit is to ensure that the cache is not filled with write-pending tracks, leaving no place to put new I/O in cache.
SRDF/A creates WP tracks as part of each cycle.
##Monitoring with symstat command options
symstat -type cycle -reptype rdfa -rdfg all -i <interval>
symstat -type cycle -reptype rdfa -rdfg 10 -i <interval>
symstat -type cache -reptype rdfa -rdfg all -i <interval>
symstat -type request -reptype rdfa -rdfg all -i <interval>
symevent list -error
##Monitoring SRDF/A
symstat -type cycle -reptype rdfa -rdfg 10 -i 5
TAS flags:
T (type) - 1 = R1, 2 = R2.
A (async) - Y = yes, N = no.
S (status) - A = active, I = inactive.
Cycles: on the source side, capture is the active cycle and transmit the inactive one; on the target side, apply is the active cycle and receive the inactive one.
SRDF/A Delta Set Extension (DSE):
##When can DSE help?
SRDF/A DSE addresses abnormal and temporary problems:
- Unexpected host load
- Link bandwidth issues
- Temporary link loss
It increases resiliency for SRDF/A; DSE is not going to solve any permanent or persistent problems.
##Listing configured DSE pools: symcfg list -sid 20 -pools -rdfa_dse
symconfigure -sid 20 -cmd "create pool BC_DSE, type=rdfa_dse;" commit
symconfigure -sid 20 -cmd "disable dev 24B:24E in pool default_pool, type=snap;" commit
symconfigure -sid 20 -cmd "add dev 24B:24E to pool BC_DSE, type=rdfa_dse, member_state=ENABLE;" commit
PTECSL flags: pool type, technology, emulation, compression, state, disk location.
##Set RDF group attributes and activate DSE
> symrdf -sid 20 -rdfg 10 set rdfa_dse -autostart on -fba_pool bc_dse -both_sides
##Activate DSE: symrdf -sid 20 -rdfg 10 -rdfa_dse activate -both_sides
##symcfg list -sid 20 -rdfg 10 -rdfa
Flags:
C - consistency
S - state
R - RDF mode
M - multi-session consistency
T - transmit idle
D - delta set status
A - autostart
When the threshold reaches 50%, the cached delta set data is staged to the DSE pool.
##symcfg show -sid 20 -pool BC_DSE -rdfa_dse
##Query after temporary link loss: at the time of capture, the session had been in transmit idle for 35 seconds.
When the link is lost, the transmit idle time shows as 35 seconds.
The RDF pair state is TransIdle.
##DSE pool utilization
##SRDF/A group-level write pacing: extends availability by avoiding cache overflow.
It also monitors:
> the R1-side I/O rates
> the R2-side restore rates
> the transmit and receive rates on the source and target sides.
SRDF/A write pacing is available for Enginuity 5874 and above. It helps secure the availability of an SRDF/A session by preventing conditions that cause cache overflow:
host writes are paced according to the SRDF I/O rate, so that cache overflow is avoided at both the R1 and R2 ends.
SRDF/A write pacing can also monitor and respond to spikes in the host write I/O rates, and to slowdowns in data transmittal to R2 and in R2 restore rates.
SRDF/A device-level write pacing: device-level write pacing is a newer feature, supported at Enginuity 5875 and above with SE 7.2 on both sides (R1 and R2).
Here the restore rate (also called the apply rate) is monitored on the R2 side: the write rate at the R1 site is compared with the apply rate at the R2 site, and if the R1 rate is higher, corrective action is taken.
Both group- and device-level write pacing can be enabled at the same time.
** Only those devices with snap devices attached to the R2 device will be paced.
##SRDF/A device-level write pacing (2)
When the apply rate is no longer slower than the write rate, pacing stops.
With Enginuity 5876 and Solutions Enabler 7.5, device-level and group-level write pacing can be set on the R21 side:
R21 -> R2 leg of a cascaded SRDF configuration: group- and device-level write pacing are supported.
The R1 -> R21 leg must be in synchronous mode and R21 -> R2 must be asynchronous.
The R21 array must be at Enginuity 5876 Q4 2012; the other two Symmetrix arrays must be at a minimum of 5875.135.91 for enhanced group- and device-level write pacing.
This allows TF/Snap, TF/VP Snap, and TF/Clone off the R2 device.
##Activate group- and device-level write pacing: symrdf -rdfg 10 activate -rdfa_pace -nop
With Solutions Enabler 7.4, both group and device write pacing can be activated in a single command:
# symrdf -sid 20 -rdfg 10 activate -rdfa_pace -dp_autostart on -wp_autostart on -nop
##Group- and device-level write pacing
Note that R2 does not have consistent data after a long link failure, and at that point there will be a lot of write pendings; so simply re-enabling the RDF link would not be good at all. Because of the high write-pending load, it is better to move to SRDF/A only once both sides are properly synchronized.
##Recovery Example
When the link fails, the device pairs go into the Partitioned state and production work continues on the R1 side.
#symrdf query -rdfa   (loss of link placed the status in the Partitioned state)
Once the link is recovered, the session is still inactive and the mode is asynchronous. When the links are active again the pairs move to the Suspended state, even though the link is up.
symrdf query -rdfa
Because consistency is enabled, it must be disabled before changing the mode:
symrdf disable -nop
symrdf set mode acp_disk -nop
symrdf query -rdfa
Then resume, return to asynchronous mode, and re-enable consistency:
symrdf resume
symrdf set mode async
symrdf enable
##SRDF Session Recovery Tool
The SRDF session recovery utility is initiated by the symrecover command. It runs in the background (e.g. as a Windows scheduled task) and monitors synchronous and asynchronous operations. If a failure is detected, automatic recovery is initiated through a preconfigured options file with gold-copy parameters.
The symrecover command can be run from the R1 or R2 side, but in the case of concurrent SRDF it must be run from the R1 side.
symrecover start -cg RDFAmon -mode async -options cg_mon_opts
This starts symrecover from the R1 host; RDFAmon is the consistency group and cg_mon_opts is the options file.
##SRDF/A MSC Operations
There are three ways the RDF daemon can be started. If the RDF daemon is enabled, it is started by Solutions Enabler by default; the first connection may take a little time while the daemon builds its cache.
set SYMAPI_USE_RDFD = ENABLE
Create a composite group with the -rdf_consistency option: the group definition is passed to the RDF daemon as a candidate group. If the daemon is not already running, it is started automatically.
##Enable the consistency group
symrdf -cg <Composite_group> set mode async
symrdf -cg <Composite_group> enable
##Managing the RDF Daemon
Before starting storrdfd, ensure the default SYMAPI configuration database is up to date: storrdfd uses the database information to connect to the remote arrays, so that information must be correct.
The daemon can also be started and installed for autostart manually:
stordaemon start storrdfd -wait 10
stordaemon install storrdfd -autostart
Setting autostart is recommended because rebuilding the cache can take time, depending on the number of RDF groups in use.
##SRDF/A with MSC
A composite group is created and RDF groups are added to it; the CG is then enabled for multi-session consistency.
##SRDF/A Consistency Exempt Feature
Motivation: frequently adding and removing device pairs from an active SRDF/A group.
> Without the feature, all devices currently in the SRDF/A session must be suspended in order to remove any of them.
> Writes to the devices during the suspension become invalid tracks.
> After adding/removing devices the group can be resumed, making the session active again.
> During synchronization the status is SyncInProg until all invalid tracks are cleared and cycle switches occur.
> If the R2 goes down during this time, the DR copy is inconsistent, so there is a DR exposure.
##SRDF/A Consistency Exempt Feature
This feature, which requires Enginuity 5773.150 and Solutions Enabler 7.0, allows devices to be exempted from the dependent-write consistency calculations.
The consistency-exempt attribute is maintained on an SRDF mirror once set by the user, but it is cleared in the following situations:
1. Deleting SRDF pairs.
2. Moving SRDF pairs.
3. Resuming SRDF pairs: the attribute is cleared once synchronization completes, no invalid tracks remain, and two cycle switches have occurred.
If the R2 reports that the R1 is not available, the attribute cannot be removed by the user; it is only removed by Enginuity, and there is no CLI command to remove it. With this feature, devices can be added without suspending the link: the consistency algorithm is not applied to a newly added device while the consistency-exempt attribute is set on it, and the attribute is removed only once the device is synchronized and two cycle switches have passed.
##Moving a device with the -cons_exempt flag on the movepair operation clears the consistency-exempt indicator from the SRDF mirror in the old group; the indicator is then set when the device is moved into the new SRDF group.
##SRDF Operations Allowed with Consistency Exempt
Operations such as establish, resume, and suspend can be performed on a subset of devices; split and failover cannot be performed on a subset.
-cons_exempt can be used with a device file, device group, or consistency group.
symrdf createpair -cons_exempt :- creates both pairs as consistency exempt.
symrdf movepair -cons_exempt :- creates the pairs in the target group as consistency exempt.
symrdf suspend -cons_exempt :- enables consistency exempt on the current SRDF pair.
##Adding Devices to an Active SRDF/A Session
1. Create the new device pairs in a temporary SRDF group.
2. Synchronize them with the -establish option.
3. Suspend the pairs.
4. Move the pairs from the temporary SRDF group to the active group.
5. Resume the pairs, then wait for the consistency-exempt pairs to become consistent.
##Removing Devices from an Active SRDF/A Session (Solutions Enabler 7.0 and Enginuity 5874)
1. Suspend the relevant device pair(s) in the current SRDF/A session. This requires the -cons_exempt flag; if consistency is enabled for the SRDF group, the -force option may be required for the suspend to succeed.
2. Verify the devices are suspended and the consistency-exempt attribute is set on them.
3. Move the pairs to a different RDF group; movepair also needs -cons_exempt only if the devices are being moved to another SRDF/A group.
##Query the existing SRDF group
> symrdf query -rdfa
In the MDACE flags, E stands for consistency exempt: X = enabled, . = disabled.
## Create New Pair
symrdf addgrp -label temp -sid 12 -remote_sid 20 -dir 09F,10F -remote_dir 09F,10F -rdfg 11 -remote_rdfg 11
symrdf createpair -sid 20 -type r1 -establish -f pairs.txt
symrdf -sid 20 -rdfg 11 query -f pair2.txt
Now suspend and move the device pair:
symrdf -sid 20 -rdfg 11 -f pair.txt suspend -nop
symrdf movepair -rdfg 11 -new_rdfg 10 -f pair.txt -cons_exempt
##Query the SRDF/A pair
symrdf query -rdfa
##Verify consistency
symrdf -sid 20 -rdfg 10 resume -f pair.txt -nop
symrdf query -rdfa
##Open Replicator
Open Replicator copies a point-in-time (PIT) image of local Symmetrix volumes and transfers it from one storage array to another; only the PIT is transferred. Open Replicator also offers live and incremental migrations, and during a migration you do not have to wait for the data to finish copying.
- Uses the SAN/WAN to make copies.
- Full or incremental copies.
Cold copy device file format (the WWNs below are placeholders):
control
wwn=3wr23847238salkdfja;sjd
remote
wwn=kja;sdlfja;sdkf;as
##Incremental Push
symrcopy create -differential
symrcopy activate
symrcopy verify -copied
symrcopy recreate
symrcopy activate
The session must initially be created as differential for the recreate to be possible.
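The incremental-push lifecycle above can be sketched as a small state machine (a toy model for illustration; the state names only approximate the real symrcopy session states, and this is not the symrcopy implementation):

```python
class ORSSession:
    """Toy model of the incremental-push lifecycle:
    create -> activate -> copied -> recreate -> activate.
    A recreate is only valid if the session was created
    with -differential, as the notes state."""

    def __init__(self, differential: bool):
        self.differential = differential
        self.state = "Created"

    def activate(self):
        assert self.state in ("Created", "Recreated")
        self.state = "CopyInProg"

    def finish_copy(self):
        # corresponds to 'symrcopy verify -copied' succeeding
        assert self.state == "CopyInProg"
        self.state = "Copied"

    def recreate(self):
        if not self.differential:
            raise RuntimeError("recreate requires a differential session")
        assert self.state == "Copied"
        self.state = "Recreated"
```

A non-differential session fails at the `recreate` step, mirroring the CLI behavior described later under the copy-mode variables.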
##Symmetrix Differential Data Facility (SDDF)
Each Symmetrix logical volume can support up to 16 SDDF sessions. An SDDF session comprises a bitmap that flips a bit for every track that has changed since the session was initiated.
SDDF sessions are used to monitor changes in:
- Clones
- Snaps
- BCVs
- Change Tracker
- Open Replicator
##Incremental Push Details
> Upon creation of a session, two bitmaps are set up. The protection bitmap represents which tracks still have to be copied, and the SDDF bitmap records tracks changed since activation:
Protection bitmap: 1111111111111111111111111
SDDF bitmap:       0000000000000000000000000
After the copy:
Protection bitmap: 0000000000000000000000000
SDDF bitmap:       0101011100000000000000000
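The two-bitmap scheme can be modeled in a few lines (a simplified sketch; real SDDF bitmaps live in Symmetrix metadata and track physical tracks):

```python
def new_session(ntracks: int):
    """On create: protection bitmap all 1s (nothing copied yet),
    SDDF bitmap all 0s (no changes recorded yet)."""
    return [1] * ntracks, [0] * ntracks

def copy_track(protection, track):
    protection[track] = 0   # track has been copied to the remote

def host_write(sddf, track):
    sddf[track] = 1         # track changed since the session was activated

def tracks_for_recreate(protection, sddf):
    """An incremental recreate only needs tracks that changed since
    activation (SDDF bit set) or were never copied (protection bit
    still set)."""
    return [i for i in range(len(sddf)) if sddf[i] or protection[i]]
```

After a full copy with two subsequent host writes, only those two changed tracks need to move on the next incremental push, which is the point of the differential session.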
##Hot Pull (2)
Upon loss of I/O the application is impacted; however, the session persists, and once the link comes back the copy proceeds.
CLI examples:
symrcopy -file <filename> -pull -hot create
symrcopy -file <filename> activate
symrcopy -file <filename> terminate
Device file format (the WWNs below are placeholders):
control
symdev=vmaxid:symdev
remote
wwn=askdjf;asdfalsfka;lsdfk
remote
wwn=iqwuepfsdlfasdf;fd'aks;dfas
State: control/remote
Copy type
FA access: R/W, remote NHDC
1:1
Session failure: the session fails only in the hot push case, because you don't want hosts to wait for the production volume to respond.
##Open Replicator Symmetrix Operational Details
Zoning (hot push/pull): every FA that has access to a control device must have access to the FA of the corresponding remote device, so that all reads/writes can be pushed to or pulled from the remote volume.
##Zone and Mask Two Symmetrix Arrays
Identify the FAs with access to the control devices and get the WWNs of their ports.
#On the remote Symmetrix:
Identify the FA port numbers.
Mask the WWNs of the controlling Symmetrix FA ports to the remote FA ports.
Create a zone between the controlling and remote FAs of your choice.
symmask - if DMX.
symaccess - if Symmetrix VMAX array.
* Masking has to be done on the remote array.
##symaccess list view -sid 12
##symaccess -sid 33 show ors_20_ig -type initiator
Masking is done on the remote array: the initiator group contains the WWNs of the control Symmetrix FA ports, so the control FAs act as initiators and the remote array acts as the target.
##symaccess for the port group on the remote array
symaccess -sid 33 show esx163_pg -type port
The port group contains the port numbers of the remote array.
##symaccess command for viewing the storage group
symaccess -sid 33 show ors_33_sg -type storage
The storage group contains the volumes.
##SYMCLI to perform ORS operations
SYMCLI_RCOPY_COPY_MODE environment variable:
COPY_DIFF :- sets background copy mode; when the session is activated it transitions to CopyInProg. Sets the default mode for create as differential, allowing for a subsequent recreate. Do not use with offline or online pull.
NOCOPY_DIFF :- no background copy; data is copied differentially on access and the status is CopyOnAccess. Do not use with offline or online pull.
COPY_NODIFF :- sets background copy mode; when the session is activated it transitions to CopyInProg. The session is not differential, so recreate fails.
NOCOPY_NODIFF :- does not set background copy mode; copies on access. The recreate will fail.
PRECOPY_DIFF :- sets precopy mode; the session can be recreated. Use only for hot push.
PRECOPY_NODIFF :- sets precopy mode; the session is not differential at create time, so recreate fails. Use only for hot push.
##SYMAPI_RCOPY_GET_MODIFIED_TRACKS Option
This options-file variable affects all sessions.
##Devices to use for the ORS transfer
SID 20 is the control array and SID 33 is the remote array; these devices are also added to the masking on the remote array.
symdev list -sid 20 pd
##symsan command
Lists port and LUN WWNs seen from particular director ports, so zoning between a port and the intended OR target can be validated. Does not require an OR session to have been created.
Examples:
##Display remote port WWNs
X - incomplete record.
X - record is a controller; . - record is not a controller.
X - record is reserved; . - record is not reserved.
Device type: A = AS400, F = FBA, C = CKD.
Thin device and Symmetrix device indicators are also shown.
##Create
symrcopy create
remote
clardev=askldjf;aksdf;laks
If the CLARiiON devices have not been discovered using SYMCLI, the device will show as APMA98324098W7; it is best to discover the CLARiiON devices first. Using the raw WWN is not recommended, as it may cause errors.
symcfg list -authorization
symcfg add authorization -host <SPA ip address> -username <username> -password <password>
symcfg add authorization -host <SPB ip address> -username <username> -password <password>
(one authorization per CLARiiON storage processor, SPA and SPB)
##Sizing remote array volumes
Transferring data between a DMX and a non-Symmetrix array is no different from VMAX to VMAX; the E-Lab Navigator has the details of the storage qualified for data transfer. The principal challenge is finding the WWNs of the devices on the remote array; once the WWNs are known, the Open Replicator session can be created over the SAN.
Symmetrix arrays measure the size of their LUNs in cylinders while other arrays use byte blocks, so care must be taken if bidirectional data transfer is planned.
> When the remote device is smaller than the control device, a push cannot be performed without specifying the -force_copy flag; a pull works, because the extra tracks are simply left untouched.
> When the remote device is bigger than the control device, a pull cannot be performed unless -force_copy is specified; a push works, because the extra space on the remote device is simply left untouched.
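The size rules above can be encoded as a small check (an illustrative sketch only; sizes are abstracted to track counts and the function is not part of any EMC API):

```python
def ors_size_check(control_tracks: int, remote_tracks: int,
                   direction: str, force_copy: bool = False) -> bool:
    """Return True if the copy is allowed under the notes' rules.

    A push to a smaller remote, or a pull from a larger remote,
    needs -force_copy; the opposite mismatches are allowed because
    the extra tracks are simply left untouched.
    """
    if direction == "push":
        return remote_tracks >= control_tracks or force_copy
    if direction == "pull":
        return remote_tracks <= control_tracks or force_copy
    raise ValueError("direction must be 'push' or 'pull'")
```

For example, pushing a 100-track control device to a 50-track remote is rejected unless force_copy is set, while pulling from that smaller remote is fine.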
##Open Replicator and thin devices
Thin devices can be used as control or remote devices; thin-to-standard replication can also be performed using Open Replicator.
##Federated Live Migration (FLM)
A non-disruptive migration approach that combines array-based migration of data using ORS with host-based I/O redirection using PowerPath. A set of coordinated commands through EMC Symmetrix Management Console drives both the array migration and the host application redirection from one central point, making the migration truly non-disruptive.
Additionally, FLM is flexible enough to migrate thin-to-thin, thin-to-thick, and thick-to-thin, and the host-level redirection using PowerPath helps eliminate time-consuming remediation.
Federated Live Migration requires Enginuity 5671, 5773, or 5875 and PowerPath 4.5.
##Underlying Technology
FLM operates by having the new VMAX device assume the identity and geometry of the old device; it mainly uses hot pull.
FLM terminology:
Control device = FLM target.
Remote device = FLM source (donor).
Host Access Mode: new in Enginuity 5875; Active or Passive.
Device External Identity: the FLM Symmetrix array presents a unique identity for each host-visible Symmetrix logical volume, made up of the WWN, front-end director, and device geometry. The spoofed identity can be recognized because the director ports are offset by 2, e.g. 7E:2 instead of 7E:0.
##Migration Considerations
Old zone - connectivity between the DMX and the application host.
New zone 1 - connectivity between the VMAX and the application host.
New zone 2 - connectivity between the VMAX and the old DMX.
Solutions Enabler 7.2 on the control host, Enginuity 5875 with ACLX on the new VMAX, and an Enginuity ePack on the old DMX are required.
The donor DMX device should not be part of local or remote replication.
Max. 32 pairs at a time.
A SAN view equivalent to symsan is available.