
Server Load Balancing

Introduction
Why is load balancing of servers needed?
If only one web server responds to all the incoming HTTP requests for your website, it may not have the capacity to handle the high volume of traffic once the website becomes popular.
The website's pages will load slowly, as some users will have to wait until the web server is free to process their requests.
The growth in traffic and connections to your website can reach a point where upgrading the server hardware is no longer cost effective.

In order to achieve web server scalability, more servers need to be added to distribute the load among the group of servers, also known as a server cluster.
The distribution of load among these servers is known as load balancing.
Load balancing applies to all types of servers (application servers, database servers); however, this section is devoted to load balancing of web servers (HTTP servers) only.

Load balancing mechanism - IP Spraying


When multiple web servers are present in a
server group, the HTTP traffic needs to be
evenly distributed among the servers.
In the process, these servers must appear as a single web server to the web client, for example a web browser.
The load balancing mechanism used for
spreading HTTP requests is known as IP
Spraying.

IP sprayer
TCP/IP load balancing is also called network dispatch, or IP spraying.

The equipment used for IP spraying is also called the 'load dispatcher', 'network dispatcher', or simply the 'load balancer'.
In this case, the IP sprayer intercepts each HTTP request and redirects it to a server in the server cluster.
Depending on the type of sprayer involved, the architecture can meet scalability, load balancing, and failover requirements.
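
As a rough illustration, the sketch below shows a minimal software sprayer in Python: it intercepts each HTTP GET request and forwards it to one of the servers in the cluster. The backend addresses and the random selection policy are assumptions for illustration; real dispatchers use the algorithms described in the following sections.

    # Minimal illustrative HTTP "sprayer": forwards each incoming GET
    # request to one backend server in the cluster (addresses are made up).
    import random
    import urllib.request
    from http.server import HTTPServer, BaseHTTPRequestHandler

    BACKENDS = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]  # assumed addresses

    class Sprayer(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = random.choice(BACKENDS)        # placeholder selection policy
            with urllib.request.urlopen(backend + self.path) as resp:
                status, body = resp.status, resp.read()
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), Sprayer).serve_forever()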

Server Load Balancing: Algorithms


Types of load balancing
Load balancing of servers by an IP sprayer can be implemented in different ways.
These methods can be configured in the load balancer, depending on which load balancing types it supports.
Various algorithms are used to distribute the load among the available servers.

Random Allocation
In random allocation, each HTTP request is assigned to a server picked at random from the group of servers.
In such a case, one of the servers may be assigned many more requests to process while other servers sit idle.
However, on average, each server gets its share of the load due to the random selection.

Pros: Simple to implement.


Cons: Can lead to overloading of one server while others are under-utilized.
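
A minimal sketch of random allocation in Python (the server names are placeholders):

    import random

    SERVERS = ["web1", "web2", "web3"]      # assumed server names

    def assign(request):
        # Each request goes to a server picked uniformly at random.
        return random.choice(SERVERS)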

Round-Robin Allocation
In a round-robin algorithm, the IP sprayer assigns the
requests to a list of the servers on a rotating basis.
The first request is allocated to a server picked randomly
from the group, so that if more than one IP sprayer is
involved, not all the first requests go to the same server.
For subsequent requests, the IP sprayer follows the circular order to redirect them.
Once a server has been assigned a request, it is moved to the end of the list, which keeps the requests evenly distributed among the servers.
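
A minimal sketch of round-robin allocation in Python (the server names are placeholders); note the random starting point described above:

    import itertools
    import random

    SERVERS = ["web1", "web2", "web3"]      # assumed server names

    # Start at a random position so that, if more than one IP sprayer is
    # involved, their first requests do not all go to the same server.
    start = random.randrange(len(SERVERS))
    rotation = itertools.cycle(SERVERS[start:] + SERVERS[:start])

    def assign(request):
        return next(rotation)               # next server in the circular order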

When the objects of each server are managed in a name server, the round-robin scheme can also be applied to the object references that the name server hands out: each object acquisition request received from a client is answered with the next server's object in round-robin order.
As a result, the operation requests from the client are carried out on the machine whose object was handed out by the name server.
In addition, to prevent access to a failed machine, a mechanism (WebOTX WatchServer) deletes that machine's objects from the name server's round-robin list.
This allows the system to degrade safely even when an unexpected failure occurs.
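
A rough sketch of this idea in Python (a generic illustration only, not the actual WebOTX WatchServer mechanism; the names are made up): the registry hands out server references in round-robin order, and a monitor removes a failed machine's entry so clients are no longer directed to it.

    from collections import deque

    class RoundRobinRegistry:
        def __init__(self, servers):
            self.ring = deque(servers)

        def acquire(self):
            # Hand out the next server's reference in round-robin order.
            server = self.ring[0]
            self.ring.rotate(-1)
            return server

        def remove(self, server):
            # Called by the monitoring component when a machine fails,
            # so clients are no longer directed to it.
            if server in self.ring:
                self.ring.remove(server)

    registry = RoundRobinRegistry(["app1", "app2", "app3"])   # assumed machine names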

Methods of load balancing


There are various ways in which load balancing
can be achieved.
The choice between them depends on the requirements, available features, complexity of implementation, and cost.
For example, hardware load balancing equipment is much more costly than a software solution.

Round Robin DNS Load Balancing


The built-in round-robin feature of BIND, the DNS server, can be used to load balance requests across multiple web servers.
It is one of the earliest load balancing techniques: the DNS server cycles through the IP addresses corresponding to the group of servers in a cluster.
Pros: Very simple, inexpensive and easy to implement.
Cons: The DNS server has no knowledge of server availability and will continue to point to an unavailable server.
It can only differentiate by IP address, not by server port.
The IP addresses can also be cached by other nameservers, so requests may never reach the load balancing DNS server.
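
The behaviour of round robin DNS can be sketched as follows (the addresses are documentation examples; in BIND this simply corresponds to several A records for the same name in the zone file):

    from collections import deque

    # Multiple addresses registered for the same hostname.
    records = deque(["192.0.2.1", "192.0.2.2", "192.0.2.3"])

    def resolve(hostname):
        answer = list(records)     # the client typically uses the first address
        records.rotate(-1)         # the next query sees a rotated order
        return answer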

Hardware Load Balancing


Hardware load balancers can route TCP/IP packets to the various servers in a cluster.
These load balancers typically provide a robust topology with high availability, but come at a much higher cost.

Pros: Uses a circuit-level network gateway to route traffic.
Cons: Higher cost compared to software versions.

Software Load Balancing


The most commonly used load balancers are software based, and often come as an integrated component of expensive web server and application server software packages.
Pros: Cheaper than hardware load balancers. More configurable based on requirements. Can incorporate intelligent routing based on multiple input parameters.
Cons: Additional hardware is needed to isolate the load balancer.

Failover

WebOTX allows a fail-over configuration to be constructed by linking with software that implements fail-over (e.g. MC/ServiceGuard, CLUSTERPRO, SunCluster).
In a cluster configuration with fail-over, active and standby servers are prepared. The active server is used during normal operation. If an error occurs on the active server, the standby server takes over the operation (performs fail-over).
The standby server does not perform operations during normal status. For this reason, when a cluster configuration with fail-over consists of two machines, only one server performs operations. In a mutual standby configuration, the two servers perform different processing and each also acts as the standby for the other.

If an error occurs on the active server, the standby server takes over the operation.
This maintains the system throughput when an error occurs.
For example, when a system consists of four machines (three active servers and one standby), three servers perform operations in the normal state.

If an error occurs on one of the active servers and fail-over is performed, the standby server takes over that server's operation, so the system can still operate with three servers.
In a mutual standby configuration, all four servers are used during normal operation and the system is degraded to three servers if an error occurs; the system throughput is degraded accordingly.
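
A highly simplified sketch of the active/standby monitoring idea (a generic illustration in Python, not the behaviour of MC/ServiceGuard, CLUSTERPRO, or SunCluster; real cluster software also moves IP addresses, storage, and so on):

    import socket
    import time

    def is_alive(host, port, timeout=2.0):
        # Basic TCP health check against the service port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def monitor(active, standby, port=8080):
        while True:
            if not is_alive(active, port):
                print(f"{active} failed; {standby} takes over the operation")
                active, standby = standby, active    # the standby becomes active
            time.sleep(5)                            # monitoring interval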

Summary
Pros: Round-robin is better than random allocation because the requests are divided equally among the available servers in an orderly fashion.
Cons: The round-robin algorithm is not sufficient when load balancing must take the processing overhead of each request into account, or when the servers in the group do not have identical specifications.

Weighted Round-Robin Allocation


Weighted round-robin is an advanced version of round-robin that addresses the deficiencies of the plain algorithm.
With weighted round-robin, each server in the group is assigned a weight, so that if one server can handle twice as much load as another, the more powerful server gets a weight of 2.
In such cases, the IP sprayer assigns two requests to the powerful server for each request assigned to the weaker one.

Pros: Takes the capacity of the servers in the group into account.
Cons: Does not consider advanced load balancing requirements such as the processing time of each individual request.
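
A minimal sketch of weighted round-robin in Python (the server names and weights are examples):

    import itertools

    WEIGHTS = {"big-server": 2, "small-server": 1}   # assumed weights

    # Expand the rotation so each server appears as often as its weight:
    # the weight-2 server receives two requests for every one sent to the other.
    rotation = itertools.cycle(
        [name for name, weight in WEIGHTS.items() for _ in range(weight)]
    )

    def assign(request):
        return next(rotation)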

Dynamic load balancing


In round-robin load balancing (including the weighted variant), the server a client connects to is decided when the client acquires the objects.
The same server keeps being accessed as long as the client keeps using the objects it acquired.
Depending on how the application operates, the load may therefore temporarily concentrate on one machine.

To deal with this problem, WebOTX supports dynamic load sharing on a per-operation basis.
As a result, the load is distributed across the servers in round-robin order every time the client calls an operation.
If a call to a destination fails, execution of the operation is still guaranteed: the destination is removed from the round-robin list and the call is switched to another destination.
A destination removed from the round-robin list is promptly added back after recovery, as detected by the regular monitoring.
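
The following is a minimal sketch of this per-operation scheme in Python (a generic illustration, not WebOTX's actual implementation; the class, method names, and error handling are assumptions): each call picks the next destination, a failed destination is removed from the rotation, and a recovery monitor can add it back.

    from collections import deque

    class DynamicBalancer:
        def __init__(self, servers):
            self.ring = deque(servers)   # round-robin list of destinations
            self.failed = set()

        def call(self, operation, *args):
            # Pick the next destination for this call; if the call fails,
            # remove the destination from the rotation and try another one.
            while self.ring:
                server = self.ring[0]
                self.ring.rotate(-1)
                try:
                    return operation(server, *args)
                except ConnectionError:
                    self.ring.remove(server)
                    self.failed.add(server)
            raise RuntimeError("no destination available")

        def restore(self, server):
            # Called by the regular monitoring once the server has recovered.
            if server in self.failed:
                self.failed.remove(server)
                self.ring.append(server)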
