Building A HA and DR Solution Using AlwaysON SQL FCIs and AGs v1
Summary: SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn
Availability Groups provide a comprehensive high availability and disaster recovery solution.
Prior to SQL Server 2012, many customers used FCIs to provide local high availability within a
data center and database mirroring for disaster recovery to a remote data center. With SQL
Server 2012, this design pattern can be replaced with an architecture that uses FCIs for high
availability and availability groups for disaster recovery business requirements. Availability
groups leverage Windows Server Failover Clustering (WSFC) functionality and enable multiple
features not available in database mirroring. This paper details the key topology requirements of
this specific design pattern, including asymmetric storage considerations, quorum model
selection, quorum votes, steps required to build the environment, and a workflow illustrating how
to handle a disaster recovery event in the new topology across participating job roles.
Copyright
This document is provided as-is. Information and views expressed in this
document, including URL and other Internet Web site references, may change
without notice. You bear the risk of using it.
Some examples depicted herein are provided for illustration only and are fictitious.
No real association or connection is intended or should be inferred.
This document does not provide you with any legal rights to any intellectual
property in any Microsoft product. You may copy and use this document for your
internal, reference purposes.
© 2012 Microsoft. All rights reserved.
Contents
Introduction
Failover Cluster Instances for local HA and Database Mirroring for DR
Failover Cluster Instances for local HA and Availability Groups for DR
Planning and Considerations
  Windows Server Failover Cluster Requirements
  Asymmetric Storage
  Instance Naming and File Path
  Availability Mode and Failover Mode
  Quorum Voting
  Quorum Configuration
    Tools to View/Change Quorum Model and Node Votes
    Configuring the WSFC Quorum Model
    Using DMVs and AlwaysOn Dashboard to view Quorum Information
    Configuring Node Votes
  Client Connectivity
    Read-Write Workloads
    Read-Only Workloads
Configuring the FCI+AG Solution
  Installing Prerequisites
  Setting up the Solution at the Primary Data Center
  Setting up the Solution at the DR Data Center
Monitoring Considerations
Recovering from a Disaster
Reverting Back to the Primary Data Center
Conclusion
References
Introduction
Microsoft SQL Server 2012 AlwaysOn provides flexible design choices for selecting
an appropriate high availability (HA) and disaster recovery (DR) solution for your
application. For more information about SQL Server 2012 AlwaysOn high availability
and disaster recovery design patterns, see SQL Server 2012 AlwaysOn High
Availability and Disaster Recovery Design Patterns.
This white paper describes the solution using failover cluster instances (FCI) for HA
and using availability groups (AG) for DR. This architecture combines a shared
storage solution (FCI) and a non-shared storage solution (AG).
Prior to SQL Server 2012, a common HA and DR deployment architecture involved
the use of FCIs for local high availability and database mirroring (DBM) for remote
disaster recovery. With SQL Server 2012, availability groups can replace the
database mirroring component of the solution.
This paper covers planning considerations and walks through the steps required to
build this solution. It also covers the steps required to recover from a disaster, and it
explains how to revert back to the primary data center after the primary data center
is restored.
This paper assumes a basic knowledge of failover cluster instances (FCIs),
availability groups, high availability, and disaster recovery concepts. For more
information about the full AlwaysOn solution feature set, see the Microsoft SQL
Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery white
paper. For more information about migration steps, see the Migration Guide:
Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior
Clustering and Mirroring Deployments white paper.
The target audience for this white paper includes operational SQL Server database
administrators and technology architects. It is also appropriate for system
administrators who collaborate with the database administrator role for
management of Windows Server, Active Directory Domain Services (AD DS), WSFC,
and networking.
Failover Cluster Instances for local HA and Database Mirroring for DR
In this architecture, if a failure occurs on the active node, another node can take
over as the host of the FCI within the same data center.
Database mirroring is used between the primary site and the disaster recovery site
to provide database-level protection. In the event of a primary data center outage,
or if the shared storage in the primary data center experiences a failure, the mirror
in the DR data center can be used to restore service to the applications. The
disaster recovery data center hosts another FCI on a separate WSFC, with its own
shared storage. Figure 1 provides a representation of this solution architecture.
Figure 1: FCI for high availability and database mirroring for disaster recovery
Typically, the DR data center is located at a distance from the primary data center,
and the mirroring session is set to high performance asynchronous mode in order
to minimize the overhead to transactions. Occasionally, synchronous database
mirroring between the data centers is also observed.
For more information, including a practical example of this specific solution, see
High Availability and Disaster Recovery at ServiceU: A SQL Server 2008 Technical
Case Study.
Failover Cluster Instances for local HA and Availability Groups for DR
Figure 2: FCIs for high availability and availability groups for disaster recovery
Figure 2 shows two FCIs, one in the primary data center and another in the disaster
recovery data center. Each FCI has two nodes and its own shared storage. All four
nodes, however, are part of the same WSFC. That all nodes belong to the same
WSFC is a requirement for availability groups.
Figure 2 illustrates a simple scenario topology with two data centers, each hosting
one replica of the AG on a two-node FCI. The architecture allows for variations to
this topology.
The discussion in this white paper focuses on the topology shown in Figure 2;
however, the general concepts apply to the other variations as well.
Because the four nodes across two sites are part of the same WSFC, there are
additional considerations for using shared storage that is visible only to the local
data center nodes. There are also additional considerations around quorum voting
and the quorum model. This paper discusses these and other considerations.
The availability group can be configured with one or more user databases, and it
can use either synchronous or asynchronous data movement. Synchronous replicas
add latency to the database transactions because the primary needs to receive the
acknowledgement that log records have been hardened to the secondary replica
logs before the primary replica commits the transaction.
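To make the availability-mode trade-off concrete, the following is a hedged Transact-SQL sketch of creating such an availability group. The instance names match this paper's later examples; the database name and endpoint URLs are illustrative assumptions. Note that FCI-hosted replicas support only manual availability group failover.

```sql
-- Sketch only: database name and endpoint URLs are hypothetical.
CREATE AVAILABILITY GROUP [AG1]
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLFCIPrimary\INST_A' WITH (
        ENDPOINT_URL      = N'TCP://sqlfciprimary.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,  -- async minimizes transaction latency
        FAILOVER_MODE     = MANUAL),              -- FCI replicas require manual failover
    N'SQLFCIDR\INST_B' WITH (
        ENDPOINT_URL      = N'TCP://sqlfcidr.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = MANUAL);
```

Choosing SYNCHRONOUS_COMMIT for either replica instead would trade transaction latency for a zero-data-loss failover target.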
It is also important to note that the disaster recovery SQL Server instance does not
need to be an FCI. An availability group can also use a stand-alone SQL Server
instance for the secondary replica. With availability groups, you can mix stand-alone
instances and FCIs within a single topology on the same WSFC. Figure 3 shows such
a mixed topology.
Figure 3: FCI for local HA and availability groups for DR, with the DR instance being a
stand-alone instance
The rest of the paper assumes that both the primary and secondary replicas are
hosted FCIs, and not stand-alone instances.
Asymmetric Storage
Two FCIs, one at each site on a single multi-site WSFC, introduce considerations
around how shared storage is handled. Each FCI has its own shared storage. The
nodes at the primary site share storage among themselves to form a shared-storage
FCI, and the nodes at the DR site share storage among themselves to form another
shared-storage FCI. The storage on the primary site is not visible to the nodes on
the disaster recovery site and vice versa. This arrangement of storage, where a
cluster disk is shared between a subset of nodes within a WSFC, is referred to as
asymmetric storage. Before the asymmetric storage capability, shared storage
needed to be visible to all the nodes in the WSFC (symmetric storage). Asymmetric
storage was introduced as a deployment option for Windows Server 2008 via a
hotfix. Asymmetric storage is also supported in Windows Server 2008 R2 via Service
Pack 1. For more information about this hotfix, see the Knowledge Base article
Hotfix to add support for asymmetric storages to the Failover Cluster Management
MMC snap-in for a failover cluster that is running Windows Server 2008 or Windows
Server 2008 R2.
This Windows Server enhancement is the key piece of functionality that enables the
FCI + AG solution architecture discussed in this white paper. By enabling this
functionality, you can combine the shared storage solution (FCI) with the nonshared storage solution (availability groups), in a single HA + DR solution.
Consequently, this enhancement also enables you to use identical drive letters for
shared disk resources across data centers.
Note that when you configure asymmetric storage, you may see a message during
WSFC validation tests stating that "Disk with id XYZ is visible or cluster-able only
from a subset of nodes." For asymmetric storage, this is expected and not a cause
for concern.
This vote assignment translates to a total of 2 votes for the WSFC. As a best
practice, the total number of votes for the WSFC should be an odd number. If there
is an even number of voting nodes (as in our example topology), consider adding a
file share witness, and then choose the Node and File Share Majority quorum model.
Note: In many enterprise environments it is common for a file share to be
owned and managed by a different team. That team then has control over a
quorum vote, and thus has influence on the status of the failover cluster.
Because the file share witness carries a vote, it must always be available.
Clustering or other HA technologies are recommended in order to ensure the
availability of the file share witness.
Alternatively, you can add an additional node and use the Node Majority quorum
model. The additional node needs to be within the WSFC but it does not need to be
a part of the FCI configuration. It should also be located in the same primary data
center, collocated with the other two WSFC nodes that exist in that data center.
Figure 4 shows the vote allocation using the Node and File Share Majority quorum
model.
In Figure 4, each of the two nodes in the primary data center has a vote. A file share
witness is also defined in the primary data center and also has a vote. The two
nodes in the disaster recovery data center are not given a vote and cannot affect
quorum.
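For illustration, the quorum model and votes for this topology could be configured from Windows PowerShell along these lines. The witness share path is a hypothetical example, and setting NodeWeight on Windows Server 2008/2008 R2 requires the cluster node weight hotfix mentioned later in this paper.

```powershell
Import-Module FailoverClusters

# Use Node and File Share Majority, with the witness share hosted
# in the primary data center (share path is a hypothetical example)
Set-ClusterQuorum -NodeAndFileShareMajority "\\PrimaryFS1\FSW"

# Remove the quorum votes from the disaster recovery nodes
(Get-ClusterNode "DRNode1").NodeWeight = 0
(Get-ClusterNode "DRNode2").NodeWeight = 0
```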
Additional possible quorum model choices for this deployment architecture are Node
and Disk Majority (using an asymmetric disk) or No Majority: Disk Only (using an
asymmetric disk). Before asymmetric storage was available in a WSFC, a shared
disk could act as a quorum resource if it was visible from all the WSFC nodes. With
asymmetric storage, cluster storage can be visible to a subset of nodes and still be
used as a quorum resource. With the asymmetric No Majority: Disk Only quorum model,
you can implement a last man standing scenario, where the WSFC retains quorum
as long as a single node has contact with the asymmetric disk that is acting as the
quorum resource.
You can enable this configuration by using the cluster.exe command line, but not
through Failover Cluster Manager or Windows PowerShell. For an example of this
configuration, see the Changing the quorum configuration in a failover cluster with
asymmetric storage section of the article Failover Cluster Step-by-Step Guide:
Configuring the Quorum in a Failover Cluster.
Important: Using an asymmetric disk as the quorum resource provides
numerous benefits, but it also requires a much higher level of cluster
expertise and planning. You should become very familiar with this
configuration before deploying it in a production environment.
In the event of a primary data center outage that requires you to bring up service in
the disaster recovery data center, you must re-evaluate the quorum configuration.
Each node in the disaster recovery data center must be assigned a vote and each
node in the primary data center must have its vote removed (set to 0) until
service is restored. Assuming two nodes for the FCI and a longer-term outage of the
primary data center, you should also configure a file share witness (or other
additional vote) in the DR data center and set the quorum model accordingly. After
the primary data center is ready for activity again, the voting must again be
adjusted and the quorum model re-evaluated. Later in this paper we'll step through
a disaster recovery scenario and its associated process flow.
The quorum model and vote assignments presented in Figure 4 assume that the
solution has two replicas, one in each of the two data centers. If you have more
data centers and you plan to put some part of your solution in a third data center,
the quorum model decisions and vote assignments may vary.
Tools to View and Change Quorum Model and Node Votes
There are multiple ways the cluster quorum model and the quorum votes can be
viewed and changed. The following list summarizes the tools for these tasks.

To view the quorum model: Windows Failover Cluster Manager, Windows PowerShell,
Cluster.exe, SQL Server DMVs, or the AlwaysOn Dashboard in SQL Server
Management Studio

To view the node votes: Windows PowerShell, Cluster.exe, SQL Server DMVs, or the
AlwaysOn Dashboard

To change the quorum model or node votes: Windows PowerShell or Cluster.exe
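As a quick sketch, the Windows PowerShell route for viewing this information might look like the following (the FailoverClusters module ships with the failover clustering feature):

```powershell
Import-Module FailoverClusters

# View the current quorum model and witness resource
Get-ClusterQuorum

# View each node's quorum vote
Get-ClusterNode | Format-Table NodeName, NodeWeight
```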
The following query returns the cluster name, quorum model, and quorum state.

SELECT cluster_name, quorum_type_desc, quorum_state_desc
FROM sys.dm_hadr_cluster;

When this query is run on the example that is covered in this white paper, it returns
the following.

cluster_name    quorum_type_desc              quorum_state_desc
--------------  ----------------------------  -----------------
contosocluster  NODE_AND_FILE_SHARE_MAJORITY  NORMAL_QUORUM

To view the vote assigned to each cluster member, query the
sys.dm_hadr_cluster_members DMV.

SELECT member_name, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;
When this query is run on the example that is covered in this white paper, it returns
the following. Vote allocation will be covered in a later section.

member_name         number_of_quorum_votes
------------------  ----------------------
PrimaryNode1        1
PrimaryNode2        1
DRNode1             0
DRNode2             0
File Share Witness  1
The AlwaysOn Dashboard in SQL Server Management Studio can also be used to
display quorum votes and the cluster state. Figure 5 shows this information for a
Windows cluster with the Node Majority quorum model (cluster state and quorum
votes are highlighted).
Figure 5: Displaying quorum votes and cluster state in the AlwaysOn Dashboard
Although the Quorum Votes column is not displayed by default, you can add it to
the dashboard by right-clicking the Availability replica table column header and
then selecting the specific column to display.
For a Node and File Share Majority quorum model, this AlwaysOn dashboard view
shows only the nodes, not the file share. To see the complete quorum information,
on the right, click View Cluster Quorum Information. A pop-up window similar to
Figure 6 appears.
Figure 6: Cluster quorum information for Node and File Share Majority Quorum model
Client Connectivity
FCI connection methods are the same in SQL Server 2012 as they were in previous
versions, but for migrations from database mirroring to availability groups, there are
changes that you must consider and plan for before you can use the new readable
secondary functionality. For more information about migration, including in-depth
considerations and steps, see the Migration Guide: Migrating to SQL Server 2012
Failover Clustering and Availability Groups from Prior Clustering and Mirroring
Deployments white paper.
Read/Write Workloads
For read/write workloads that run against the availability databases in an availability
group, you can connect to the primary replica using two options. The first option is
to connect directly to the FCI virtual network name (VNN); each replica has a
different FCI VNN. The second option is to use the availability group listener name.
The availability group listener is the preferred option because it provides
transparency and automatic redirection to the current primary replica, and the
name in the connection string stays the same for all instances. The availability
group listener is a VNN that is bound to one or more TCP/IP addresses and listener
ports and is used to automatically connect to any replica without the need to
explicitly designate each possible availability group replica in the connection string.
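For illustration, a read-write connection string using the listener might look like the following sketch (the listener name, port, and database name are hypothetical examples):

```
Server=tcp:AG1-Listener,1433;Database=SalesDB;Integrated Security=SSPI;
```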
If you are migrating read/write workload application connections from a database
mirroring solution that uses the Failover Partner attribute, you can still use your
database mirroring connection string, but only if the availability group is configured
with a single secondary replica that is configured for read/write activity. You can
then use the initial primary replica server name as the data source and (optionally)
the secondary replica name as the failover partner. This should not be used as a
long-term solution, however.
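A legacy-style connection string of this kind might look like the following sketch; the data source and failover partner use this paper's example instance names, and the database name is a hypothetical example:

```
Server=SQLFCIPrimary\INST_A;Failover Partner=SQLFCIDR\INST_B;Database=SalesDB;Integrated Security=SSPI;
```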
Read-Only Workloads
For read-only workload connections, you also have two options available to you. You
can use the FCI VNN or you can use the availability group listener and specify the
new ApplicationIntent attribute in the connection string as ReadOnly.
If you are using a legacy database mirroring connection string, you can connect to
the availability group only as long as the availability group is configured with a
single secondary replica that is configured for read/write activity.
If you want to leverage read-only routing, you must use the availability group
listener name in conjunction with the ApplicationIntent attribute and ReadOnly
value. You must also reference an availability database within the availability group.
The availability group must also be configured for read-only routing to readable
secondary replicas via the creation of read-only routing URLs and read-only routing
lists. For more information about this process, see Configure Read-Only Routing for
an Availability Group (SQL Server).
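A read-only connection string using the listener and the new attribute might look like the following sketch (the listener name, port, and database name are hypothetical examples):

```
Server=tcp:AG1-Listener,1433;Database=SalesDB;Integrated Security=SSPI;ApplicationIntent=ReadOnly;
```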
Multi-Subnet Connection Support
The availability group listener can also leverage the MultiSubnetFailover connection
attribute in client libraries. It is recommended that availability group connection
strings designate the MultiSubnetFailover attribute for multi-subnet topologies when
they reference an availability group listener name. The MultiSubnetFailover
connection option enables support for multi-subnet connections and opens TCP
sockets for the availability group listener IP addresses in parallel. For legacy client
libraries that do not support the MultiSubnetFailover attribute, you should consider
an appropriate client login timeout.
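In a multi-subnet topology, the attribute is simply added to the listener-based connection string, as in this sketch (names are hypothetical examples):

```
Server=tcp:AG1-Listener,1433;Database=SalesDB;Integrated Security=SSPI;MultiSubnetFailover=True;
```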
For more information about client connectivity and application failover
considerations, see Client Connectivity and Application Failover (AlwaysOn
Availability Groups) in SQL Server Books Online.
Configuring the FCI+AG Solution
Installing Prerequisites
Before you deploy your AlwaysOn Availability Groups solution, it is important to
verify that your system meets requirements, including updates. For more
information about prerequisites for deploying an AlwaysOn Availability Groups
solution, see Prerequisites, Restrictions, and Recommendations for AlwaysOn
Availability Groups (SQL Server). We strongly recommend that you review this topic
before you proceed.
All nodes must have the same version of the Windows Server operating system and
software updates installed. The server operating system should be a minimum of
Windows Server 2008 SP2, or Windows Server 2008 R2 SP1 with at least the
following updates:
[Not recoverable from this copy: the list of required operating system updates, the
setup steps for the primary and DR data centers, and the bodies of the
accompanying role-responsibility tables. The tables mapped each setup step to the
roles involved: the database administrator (including coordination of activities
across roles), the Windows Server/cluster administrator, and the network
administrator. Surviving annotations note network administrator involvement for
any issues that may arise for the networking of the nodes, coordination of the IP
address (if you are using static IP addresses) and port considerations, and
ensuring that the endpoint ports are open, with troubleshooting as needed.]
Table 2: Building the FCI+AG Solution in the disaster recovery data center
After you have finished these steps, in Windows Failover Cluster Manager you can
see that a new group with the same name as the availability group is created under
Services and Applications. Within that new group you'll also find the availability
group listener resource and the associated listener IP addresses (see Figure 7: After
configuration of the FCI for HA and AG DR design solution).
Figure 7 shows the WSFC view of the deployment. Note that AG Listener in the
figure shows one associated IP address for illustrative purposes; however, two IP
addresses are common for multi-data center topologies.
Note: While the availability group appears as a resource in the WSFC, you
should not attempt to manage it with Failover Cluster Manager or other
WSFC-scoped interfaces. Instead, manage the availability group within the
context of the SQL Server instance via SQL Server Management Studio,
Transact-SQL, or Windows PowerShell. For more information about why you
should not use Failover Cluster Manager or other WSFC-scoped interfaces,
see the blog post DO NOT use Windows Failover Cluster Manager to perform
Availability Group Failover.
Figure 8 shows the deployment in SQL Server Management Studio. The view shows
one of the FCIs with the AlwaysOn High Availability Object Explorer folder
hierarchy open. In this example, the DR FCI is the secondary replica and the other
FCI is the primary replica. The three availability databases that participate in the
group are listed, along with the name of the availability group listener.
Figure 8: Post-configuration of the FCI for HA and AG DR design solution in SQL Server
Management Studio
Monitoring Considerations
Migrating from an FCI and database mirroring topology to an FCI and availability
group solution will require new methods for monitoring the topology. The methods
and tools you can use for monitoring the availability group infrastructure include the
AlwaysOn Dashboard in SQL Server Management Studio, Object Explorer state
information, Policy Based Management policies, new availability group related
performance counters, catalog views, dynamic management views, and an
Extended Events session that tracks recent AlwaysOn DDL related statement
executions, WSFC connectivity issues, failover events, state changes, and redo
thread blocks.
The AlwaysOn Dashboard is a recommended way to quickly view the health of a
specific availability group. In it you can see the location of the primary instance, the
failover mode of the replicas, the synchronization state of the replicas, and the
failover readiness of the various replicas. You can also access the AlwaysOn Health
Events Extended Events session data directly from the dashboard in order to view
recent availability group activity, state changes, and events.
Additionally you can create SQL Server Agent alerts and job responses based on
performance counter thresholds and availability group state changes. For more
information and guidance regarding the monitoring of an availability group
environment, see Monitoring of Availability Groups.
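As one small example, the catalog and dynamic management views can be combined to check replica health; the query below is a sketch using standard SQL Server 2012 system views:

```sql
-- Report each replica's current role and synchronization health
-- for every availability group on the instance
SELECT ag.name AS availability_group,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_groups AS ag
    ON ars.group_id = ag.group_id;
```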
Recovering from a Disaster
In any of these scenarios, certain actions are needed at the disaster recovery data
center to resume SQL Server service to the applications.
Figure 10 shows the Cluster Quorum Information window for this scenario (this
information is accessible from the AlwaysOn Dashboard and the View Cluster
Quorum Information link). It shows the quorum before a disaster, where both DR
nodes have zero votes.
The following workflow specifies the steps needed to recover an availability group in
the disaster recovery data center in the event of a primary data center outage:
1. Force quorum on one of the DR nodes, and ensure that the nodes in the primary
data center do not form their own quorum.
Failover Cluster Manager launched on a disaster recovery node is not likely to
provide useful information (initially) on the state of the WSFC because the
cluster no longer has quorum.
Figure 11: Failover Cluster Manager after a disaster and before recovery
Because the FCIs are dependent on a functioning WSFC, they are inaccessible
unless both a cluster quorum and the cluster service are running. For a scenario
where the primary data center's status is uncertain and service must be restored
from the secondary DR data center in order to conform to business recovery
time objectives, you need to force quorum on one of the DR nodes.
The following Windows PowerShell command demonstrates how to force quorum
on one of the DR nodes.
Start-ClusterNode -Name "DRNODE1" -FixQuorum
After you execute this command, you should see something similar to the
following.
Name     State
----     -------
drnode1  Joining
Note: If the cluster service is still running on DRNODE1, you can use
the following command in Windows PowerShell to stop the service
before you start the cluster service again with force quorum:
Stop-ClusterNode -Name "DRNODE1"
For additional tools you can use to force quorum, such as cluster.exe or Failover
Cluster Manager, see Force a WSFC Cluster to Start Without a Quorum.
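As an aside, the same forced-quorum start can be performed with the cluster service directly from an elevated command prompt on the DR node; this is a common alternative when the PowerShell module is unavailable:

```
net start clussvc /forcequorum
```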
2. Open Failover Cluster Manager to see the status of the Windows cluster. At this
point, the Windows cluster should be up in the forced quorum state, and the
secondary FCI should be up. The primary data center FCI will still be offline, as
will the availability group resources.
Figure 13: SQL Server Management Studio Object Explorer after forcing quorum
After it comes back online, new connections to the availability group listener
route automatically to the current primary replica, which is now hosted by the
disaster recovery FCI.
Also note that you will still see various warning messages about the primary
data center nodes being unavailable in SQL Server Management Studio.
Figure shows an example of what this may look like.
4. From a DR WSFC node, remove votes from the primary data center nodes and
give votes to DR data center nodes. Votes can be removed even though the
primary data center nodes are not available. The two nodes assigned a weight of
1 are the DR WSFC nodes.
(Get-ClusterNode "DRNode1").NodeWeight=1
(Get-ClusterNode "DRNode2").NodeWeight=1
(Get-ClusterNode "PrimaryNode1").NodeWeight=0
(Get-ClusterNode "PrimaryNode2").NodeWeight=0
Note: If the DR site needs to be used for a longer period of time, it is
recommended that additional voting members (WSFC node or file
share) be added.
Before continuing, you can validate that the node votes were modified as
intended by using the following Windows PowerShell command.
Get-ClusterNode | fl NodeName, NodeWeight
As mentioned earlier in the paper, large enterprise environments typically have a
separation of duties among the database administrator, Windows Server (or cluster)
administrator, and network administrator roles. The following table recaps the
previously described disaster recovery workflow, indicating which areas typically fall
under the various enterprise roles from a planning perspective.
[The role-responsibility table for this disaster recovery workflow is not recoverable
from this copy; it mapped each workflow step to the database administrator,
Windows Server/cluster administrator, and network administrator roles.]
Reverting Back to the Primary Data Center
Figure 16: SQL Server Management Studio after primary FCI recovery but before resuming
the availability group
The DR site SQL Server instance (in our example SQLFCIDR\DC2) is still the
primary replica. Also notice the pause symbol by each availability database under
the Availability Databases folder.
At this point you should evaluate whether you need to salvage data (that is, the
data changes that were made on the original primary replica but had not yet been
sent to the DR replica prior to the disaster), or move forward instead with
reestablishing the replica sessions.
Caution: Resuming the availability group replicas at this point may cause data loss,
so if data loss is not acceptable, the data must be salvaged before data movement
is resumed. Conversely, not resuming the availability group causes the transaction
log files to keep growing on the DR replica databases.
One method to do this would be to create a database snapshot on the suspended
secondary databases (original primary) for the purpose of extracting the data
needed in order to resynchronize with the DR replica version of the availability
databases. The following example demonstrates how to create a database snapshot
on a suspended (not synchronizing) availability database.
-- Create the database snapshot (the database and file names shown
-- here are illustrative)
CREATE DATABASE SalesDB_Snapshot
ON (NAME = SalesDB_Data,
    FILENAME = 'C:\SQLData\SalesDB_Snapshot.ss')
AS SNAPSHOT OF SalesDB;
SELECT role_desc,
       synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states;
5. To fail over from the disaster recovery data center FCI to the former primary
data center FCI, connect and execute the following script on the primary data
center FCI, which will become the new primary replica.
ALTER AVAILABILITY GROUP [AG1] FAILOVER;
6. If your topology uses high-performance mode, as mentioned earlier, change the replicas back to asynchronous commit. Execute the following Transact-SQL on the primary replica; it sets both the disaster recovery FCI replica and the primary data center FCI replica to asynchronous commit.
USE [master]
GO
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLFCIDR\INST_B' WITH
(AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
GO
USE [master]
GO
ALTER AVAILABILITY GROUP [AG1]
MODIFY REPLICA ON N'SQLFCIPrimary\INST_A' WITH
(AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
GO
7. Remove quorum votes from the disaster recovery nodes.
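One way to remove the votes (a sketch, assuming Windows PowerShell with the Failover Clusters module and illustrative node names; on Windows Server 2008 R2 the NodeWeight property requires the quorum-vote hotfix listed in the References section):

```powershell
Import-Module FailoverClusters

# Remove the quorum vote from each disaster recovery node (node names are illustrative)
(Get-ClusterNode -Name "DRNODE1").NodeWeight = 0
(Get-ClusterNode -Name "DRNODE2").NodeWeight = 0

# Verify the resulting vote assignments across the cluster
Get-ClusterNode | Format-Table -Property NodeName, NodeWeight
```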
The following table recaps the previously described disaster recovery workflow,
indicating which areas typically fall under the various enterprise roles from a
planning perspective.
[Table: each disaster recovery workflow step, with a "Yes" marking whether it falls to the database administrator, the Windows Server \ cluster administrator, or the network administrator; the step rows are not reproduced here.]
Conclusion
SQL Server 2012 AlwaysOn Availability Groups can be used to replace database
mirroring in topologies using FCIs for high availability and database mirroring for
disaster recovery. This design pattern extends the capabilities beyond what was
offered in earlier versions, allowing for a multi-database unit of failover, read-only
replicas, and more. The intent of this white paper was to present a new HA and DR
solution using AlwaysOn FCIs and AlwaysOn Availability Groups to replace the
legacy architecture.
Successful deployment of such an HA/DR solution involves not just the DBA team,
but close collaboration between the DBA team, Windows Server administration
team, and the networking team in the IT organization. Cross-education of skills is
very valuable when you deploy the HA/DR solution.
References
SQL Server 2012 AlwaysOn High Availability and Disaster Recovery Design
Patterns (http://go.microsoft.com/fwlink/?LinkId=255048)
Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster
Recovery (http://msdn.microsoft.com/library/hh781257.aspx)
AlwaysOn Failover Cluster Instances
(http://technet.microsoft.com/library/ms189134.aspx)
Overview of AlwaysOn Availability Groups
(http://technet.microsoft.com/library/ff877884(v=SQL.110).aspx)
Failover Clustering and AlwaysOn Availability Groups
(http://technet.microsoft.com/library/ff929171.aspx)
Prerequisites, Restrictions, and Recommendations for AlwaysOn Availability
Groups (http://technet.microsoft.com/library/ff878487(v=sql.110).aspx)
Failover Cluster Step-by-Step Guide: Configuring the Quorum in a Failover Cluster
(http://technet.microsoft.com/library/cc770620(v=WS.10).aspx)
Windows Server hotfix for quorum votes
(http://support.microsoft.com/kb/2494036)
Windows PowerShell (http://technet.microsoft.com/library/bb978526)
Mapping Cluster.exe Commands to Windows PowerShell Cmdlets for Failover
Clusters (http://technet.microsoft.com/library/ee619744(v=WS.10).aspx)
Windows PowerShell Survival Guide
(http://social.technet.microsoft.com/wiki/contents/articles/183.windowspowershell-survival-guide-en-us.aspx)
Failover Cluster Cmdlets in Windows PowerShell
(http://technet.microsoft.com/library/ee461009.aspx)
SQL Server PowerShell (http://msdn.microsoft.com/en-us/library/hh245198.aspx)
Did this paper help you? Please give us your feedback. Tell us, on a scale of 1 (poor) to 5 (excellent), how you would rate this paper and why you have given it that rating. For example:
Are you rating it high due to having good examples, excellent screen shots, clear writing, or another reason?
Are you rating it low due to poor examples, fuzzy screen shots, or unclear writing?
This feedback will help us improve the quality of white papers we release.
Send feedback