Lec7 English

SOURCES: Configuring Advanced Windows Server® 2012 Services

CCNA Data Center DCICT 640-916 Official Cert Guide.

Dr. Abdulmalek Al-Humairi
Introduction

 Load balancing is a term that describes a method to distribute incoming socket
connections to different servers. It is not distributed computing, where jobs are
broken into a series of sub-jobs so that each server does a fraction of the
overall work. Rather, incoming socket connections are spread across different
servers. Each incoming connection communicates with the node it was delegated
to, and the entire interaction occurs there. Each node is unaware of the other
nodes' existence.

2
Introduction

 In computing, load balancing improves the distribution of workloads across
multiple computing resources, such as computers, a computer cluster, network
links, central processing units, or disk drives. Load balancing aims to
optimize resource use, maximize throughput, minimize response time, and
avoid overload of any single resource. Using multiple components with load
balancing instead of a single component may increase reliability and
availability through redundancy. Load balancing usually involves dedicated
software or hardware, such as a multilayer switch or a Domain Name System
server process.

3
Why load balancing is employed

 There are at least two reasons why load balancing is employed:

 The required capacity is too large for a single machine. When running
processes that consume a large amount of system resources (e.g. CPU and
memory), it often makes sense to employ multiple servers to distribute the
work instead of constantly adding capacity to a single server. In plenty of
cases, it’s not even possible to allocate enough memory or CPU to a single
machine to handle all of the work! Load balancing across multiple servers
makes it possible to host high traffic websites or run complex data
processing jobs that demand more resources than a single server can deliver.

4
Why load balancing is employed

 Looking for more reliability and flexibility in a solution deployment. Even if
you could run an entire application on a single server, it may not be a good
idea. Load balancing can increase reliability by providing many servers able to
do the same job. If one server becomes unavailable, the others can simply pick
up the additional work until a new server comes online. Software updates become
easier, since a server can simply be taken out of the load-balancing pool when
a patch or reboot is necessary. Load balancing gives system administrators more
flexibility in maintaining servers without negatively impacting the application
as a whole.

5
Redundancy

 Servers crash; this is the rule, not the exception. Your architecture should be
devised to reduce or eliminate single points of failure (SPOFs). Load
balancing a cluster of servers that perform the same role makes it possible to
take a server out manually for maintenance tasks without taking down the
system. You can also withstand a server crashing. This is called High
Availability, or HA for short. Load balancing is a tactic that assists with High
Availability, but it is not High Availability by itself. To achieve high availability,
you need automated monitoring that checks the status of the applications in your
cluster and automatically takes servers out of rotation in response to detected
failures. These tools are often bundled into load-balancing software and
appliances, but sometimes need to be programmed independently.
6
Network Load Balancing

 NLB is a scalable, high availability feature that you can install on all editions
of Windows Server 2012. A scalable technology is one where you can add
additional components (in this case additional cluster nodes) to meet increasing
demand. A node in a Windows Server 2012 NLB cluster is a computer, either
physical or virtual, that is running the Windows Server 2012 operating
system.

7
Network Load Balancing

 Windows Server 2012 NLB clusters can have between 2 and 32 nodes. When
you create an NLB cluster, it creates a virtual network address and virtual
network adapter. The virtual network adapter has an IP address and a media
access control (MAC) address. Network traffic to this address is distributed
evenly across the nodes in the cluster. In a basic NLB configuration, each node
in an NLB cluster will service requests at a rate that is approximately equal to
that of all other nodes in the cluster. When an NLB cluster receives a request, it
will forward that request to the node that is currently least utilized. You can
configure NLB to prefer some nodes over others.

8
Network Load Balancing

 NLB is failure-aware. This means that if one of the nodes in the NLB cluster
goes offline, requests will no longer be forwarded to that node, but other nodes
in the cluster will continue to accept requests. When the failed node returns to
service, incoming requests are again directed to it until traffic is balanced
across all nodes in the cluster.

9
How NLB Works

 When you configure an application to use NLB, clients address the application
using the NLB cluster address rather than the address of nodes that participate
in the NLB cluster. The NLB cluster address is a virtual address that is shared
between the hosts in the NLB cluster.

 NLB directs traffic in the following manner: All hosts in the NLB cluster
receive the incoming traffic, but only one node in the cluster, which is
determined through the NLB process, will accept that traffic. All other nodes in
the NLB cluster will drop the traffic.

10
How NLB Works

11
How NLB Works

 Which node in the NLB cluster accepts the traffic depends on the configuration of
port rules and affinity settings. Through these settings, you can determine if traffic
that uses a particular port and protocol will be accepted by a particular node, or
whether any node in the cluster will be able to accept and respond.

 NLB also sends traffic to nodes based on current node utilization. New traffic is
directed to the nodes that are least utilized. For example, if you have a four-node
cluster where three of the nodes are responding to requests from 10 clients each and
one node is responding to requests from 5 clients, the node that has fewer clients
will receive more incoming traffic until utilization is more evenly balanced across
the nodes.

12
How NLB Works with Server Failures and Recovery

 NLB is able to detect the failure of cluster nodes. When a cluster node is in a
failed state, it is removed from the cluster, and the cluster does not direct new
traffic to the node. Failure is detected by using heartbeats. NLB cluster
heartbeats are transmitted every second between nodes in a cluster. A
node is automatically removed from an NLB cluster if it misses five
consecutive heartbeats. Heartbeats are transmitted over a network that is
usually different from the network that the client uses to access the cluster.
When a node is added or removed from a cluster, a process known as
convergence occurs. Convergence allows the cluster to determine its current
configuration.

13
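The heartbeat rules on this slide (one heartbeat per second, removal after five consecutive misses) can be sketched in a few lines of Python. This is a hypothetical illustration of the counting logic only, not the actual Windows NLB implementation; node names are made up:

```python
HEARTBEAT_LIMIT = 5  # consecutive misses before a node is removed

class HeartbeatTracker:
    def __init__(self, nodes):
        # Start every node with a clean miss count.
        self.misses = {node: 0 for node in nodes}

    def tick(self, heard_from):
        """Process one one-second interval; return nodes removed this tick."""
        removed = []
        for node in list(self.misses):
            if node in heard_from:
                self.misses[node] = 0       # heartbeat received: reset counter
            else:
                self.misses[node] += 1      # missed this interval
                if self.misses[node] >= HEARTBEAT_LIMIT:
                    removed.append(node)
                    del self.misses[node]   # removal triggers convergence
        return removed

tracker = HeartbeatTracker(["node1", "node2"])
for _ in range(4):
    tracker.tick({"node1"})                 # node2 silent: 4 misses, still in
assert "node2" in tracker.misses
assert tracker.tick({"node1"}) == ["node2"] # 5th consecutive miss: removed
```

When a node is removed this way (or added back), the remaining nodes would run convergence to agree on the new cluster membership.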
Deployment Requirements for NLB

 NLB requires that all hosts in the NLB cluster reside on the same TCP/IP
subnet. Although TCP/IP subnets can be configured to span multiple
geographic locations, NLB clusters are unlikely to achieve convergence
successfully if the latency between nodes exceeds 250 milliseconds (ms).
When you are designing geographically dispersed NLB clusters, you should
instead choose to deploy an NLB cluster at each site, and then use Domain
Name System (DNS) round robin to distribute traffic between sites.

14
Deployment Requirements for NLB

 All network adapters within an NLB cluster must be configured as either
unicast or multicast. You cannot configure an NLB cluster where there is a
mixture of unicast and multicast adapters. When using unicast mode, the
network adapter must support changing its MAC address.

 You can only use the TCP/IP protocol with network adapters that participate in
NLB clusters. NLB supports IPv4 and IPv6. The IP addresses of servers that
participate in an NLB cluster must be static and must not be dynamically
allocated. When you install NLB, Dynamic Host Configuration Protocol
(DHCP) is disabled on each interface that you configure to participate in the
cluster.

15
Deployment Requirements for NLB

 All editions of Windows Server 2012 support NLB. Microsoft supports NLB
clusters with nodes that are running different editions of Windows Server
2012. However, as a best practice, NLB cluster nodes should be computers
with similar hardware specifications, and that are running the same
edition of the Windows Server 2012 operating system.

16
What Is Server Load Balancing?

 Server load balancing means spreading the workload among the servers
hosting the same application content. A Server Load Balancer (SLB) is a device
that performs the load-balancing function. A load balancer receives the client
request, analyzes the information in the request, and, based on the
load-balancing algorithm, divides the load appropriately between the servers.
Load balancing can be done for client-to-server traffic and for server-to-server
traffic. Figure 19-1 shows an example of a typical load-balancing setup.

17
What Is Server Load Balancing?

18
What Is Server Load Balancing?

 When a load balancer receives multiple requests from clients for servers in a
load-balanced farm, it distributes the requests evenly across all servers in the
farm. Figure 19-2 shows an example where there are three servers in a data
center. There is a load balancer between the clients and the servers, and this
load balancer is configured for round-robin load balancing. The load balancer
receives three requests from the clients. It divides these requests evenly among
the three servers. Therefore, all three servers are equally busy serving the
clients. The Cisco Application Control Engine (ACE) is an example of a
network-based load balancer that can perform this task.

19
What Is Server Load Balancing?

20
Load balance algorithm

 Round robin: Requests are distributed to a pool of servers in rotating
sequential order. The algorithm assumes that every server can process the
same number of requests and does not account for active connections.

 Weighted round robin: Servers are rated based on the relative number of
requests each is able to process. Those with higher capacities are sent more
requests.

 Least connections: Requests are sent to the server with the fewest active
connections, assuming all connections generate an equal amount of server
load.

21
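The round-robin and least-connections rules above can be sketched in a few lines of Python (illustrative only; the server addresses and connection counts are made up):

```python
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: rotate through the pool, ignoring current load.
rr = cycle(servers)
picks = [next(rr) for _ in range(4)]
assert picks == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]

# Least connections: send the next request to the server with the
# fewest active connections.
active = {"10.0.0.1": 10, "10.0.0.2": 10, "10.0.0.3": 5}
choice = min(active, key=active.get)
assert choice == "10.0.0.3"
```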
Load balance algorithm

 Weighted least connections: Servers are rated based on their processing
capabilities. Load is distributed according to both the relative capacity of the
servers and the number of active connections on each one.

 Source IP hash: Combines the source and destination IP addresses in a request
to generate a hash key, which is then assigned to a specific server. This lets a
dropped connection be returned to the same server that originally handled it.

 Least latency: Makes a quick HTTP OPTIONS request to each backend server and
sends the client request to the first server to answer.

22
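The source IP hash and weighted least connections strategies above can be sketched as follows. This is a hypothetical illustration: the server names, weights, and the choice of MD5 as the hash are assumptions, not taken from any particular load balancer:

```python
import hashlib

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def pick_server(src_ip, dst_ip):
    # Hash the source/destination pair so a reconnecting client is
    # mapped back to the same backend it was using before.
    key = f"{src_ip}-{dst_ip}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return servers[digest % len(servers)]

# The same client/VIP pair always maps to the same server:
assert pick_server("192.0.2.10", "203.0.113.1") == \
       pick_server("192.0.2.10", "203.0.113.1")

# Weighted least connections: divide active connections by capacity,
# so a bigger server is preferred even with more raw connections.
conns  = {"big": 8, "small": 3}
weight = {"big": 4, "small": 1}   # "big" is rated at 4x the capacity
choice = min(conns, key=lambda s: conns[s] / weight[s])
assert choice == "big"            # 8/4 = 2.0 beats 3/1 = 3.0
```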
Benefits of Server Load Balancing

Following are the benefits of the server load-balancing solution:

 Scalability: Scalability is one of the key benefits of a server load-balancing
solution. As you have seen earlier, the load-balancing solution receives a client
request for an application and divides the load among the servers running this
application. Based on your business requirements, you can start with a few
servers and set up a load-balancing environment for them. As your business
grows and the load on your existing servers increases, you can seamlessly add
more servers to the solution. The load balancer will include these new servers
in the load-balancing calculation and start sharing the load with them. This
approach allows high scalability, because a large number of servers can be added
to the solution without any outage.
23
Benefits of Server Load Balancing

 Increased Capacity: Server load balancing enables you to increase the
capacity of your application so it can deal with peak load from the clients.
Typically, an application server can support only a limited number of
connections from clients. A load-balancing solution helps you build an
infrastructure that can scale out and support a greater number of connections;
hence it provides increased capacity for your business applications. This
increase in capacity can be done seamlessly by adding more servers to the
solution.

24
Benefits of Server Load Balancing

 High Availability: In a load-balanced server farm, many servers are serving the
client requests. Therefore, if one server in the farm goes down, the load
balancer can shift the load to an alternate server. The load balancer also
performs a health check for each server in the server farm. If a server is not
healthy, it can be removed from service to avoid interruption to the clients.

 Disaster Recovery: A global load-balancing solution enables you to put your
servers across multiple sites. This feature is commonly used in disaster recovery
(DR) solutions, where clients can continue accessing the service during the
disaster using the same server name.

25
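The health-check behavior described above can be sketched as a periodic sweep that keeps only the servers whose probe succeeds. This is a minimal illustration; the probe callable and server names are hypothetical, not the actual health-check mechanism of any product:

```python
def sweep(pool, probe):
    """Return only the servers whose health probe succeeds; the load
    balancer forwards new requests to this healthy subset."""
    return [srv for srv in pool if probe(srv)]

pool = ["web1", "web2", "web3"]
down = {"web2"}                                  # simulated failure
healthy = sweep(pool, lambda s: s not in down)
assert healthy == ["web1", "web3"]               # web2 removed from service
```

A real balancer would run such a sweep on a timer and re-admit a server once its probe passes again.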
Benefits of Server Load Balancing

 Security: In a load-balancing environment, all client requests are first received
by a load balancer; therefore, the load balancer can perform certain security
checks before forwarding these requests to the servers. One of the important
security features that load balancers can apply is the access control list (ACL),
which allows or denies traffic based on the information in the client request.
Another example of a security task that load balancers typically perform is
application protocol inspection, which helps verify protocol behavior and identify
unwanted or malicious traffic passing through the load balancer. You can define
policies to accept or reject packets to ensure the secure use of applications and
services. Load balancers can also perform IP and TCP normalization, which protects
against a variety of network attacks by performing general security checks within
the IP and TCP headers.

26
Benefits of Server Load Balancing

 Operational Efficiency: In a production data center, operators typically
perform maintenance tasks on the servers. These tasks could be related to
application or hardware issues that require interrupting the server.
In a load-balancing solution, the ability to add and delete servers without
any outage provides operational efficiency. You can bring down a server and
perform maintenance without impacting the service.

27
Methods of Server Load Balancing

The following methods are used in the industry to perform server load balancing:

 DNS-based load balancing: This is an early method of performing server load
balancing. In this method, DNS name resolution is used to distribute the load
among the servers. DNS server A records are configured with multiple IP addresses
for a single name. Name resolution is done in a round-robin fashion, distributing
the load among all the servers defined in DNS. There are several disadvantages to
this approach:

 The DNS server has no knowledge of server health. It can resolve the name to
an IP address even if the server is down.
 DNS caching works against load balancing.
 It supports only a very basic method of load balancing.
28
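The round-robin A-record behavior, and the caching caveat listed above, can be sketched as follows (hypothetical names and addresses; real DNS servers implement this rotation internally):

```python
from collections import deque

# A-record set for one name; the server rotates the answer order per query.
records = {"www.example.com": deque(["198.51.100.1",
                                     "198.51.100.2",
                                     "198.51.100.3"])}

def resolve(name):
    addrs = records[name]
    addrs.rotate(-1)           # next query leads with a different address
    return list(addrs)

first  = resolve("www.example.com")[0]
second = resolve("www.example.com")[0]
assert first != second         # successive clients are sent to different IPs
# Caveat from the text: a resolver that caches the first answer keeps
# reusing one IP, and the DNS server never checks whether it is healthy.
```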
Methods of Server Load Balancing

 IP Anycast: A simple and effective way to perform load balancing of traffic
between geographic locations. In this method, IP routing techniques are
used to redirect traffic to the server in the nearest data center location.
This is achieved by announcing the same IP subnet from multiple locations
within a network. The routers then send the traffic to the preferred location
based on their routing tables.

29
Methods of Server Load Balancing

 Server-based load-balancing solutions: In this solution, the load-balancing
function is performed by the server itself. This can be achieved using features
such as load-balancing server clusters or special software that runs on the server
and performs the load balancing of traffic between the servers.

 Network-based load-balancing solutions: Server load balancing can also be
achieved within the network. In this solution, the servers perform the application
function while the network takes care of the load balancing. This approach takes
advantage of hardware acceleration within the network device; hence it improves
the performance of the solution. Network-based load-balancing services are
deployed at the aggregation layer of the data center. These load balancers could
be a router with the IOS SLB feature or a specialized switch, such as the Cisco
Application Control Engine (ACE).
30
Load Balance Type

 Network layer algorithms:

31
Load Balance Type

 Application layer algorithms:

32
