DS T1 Report - Load Balancing in Cloud Computing

The document summarizes two papers related to load balancing techniques in cloud computing. The first paper proposes a fault-tolerant load balancing model that uses two load balancers and machine learning to predict failures. It aims to improve migration techniques and reduce response times. The second paper introduces the LBMPSO technique, which uses a modified particle swarm optimization algorithm for task scheduling to minimize makespan and balance loads across virtual machines.

Load Balancing in Cloud Computing

Name - Sakshi Abbu
C No - C22020441603
Roll No - 4603
BTech IT

Paper 1: Load balancing techniques in cloud computing environment:

The paper addresses challenges in load balancing in cloud computing, including the
need to reduce response time, ensure fault tolerance, improve task migration, minimize
waiting times, and implement energy-aware task allocation for critical applications.

Cloud computing provides diverse services, but load balancing is a common challenge
affecting performance and compliance. This paper reviews Load Balancing techniques
in static, dynamic, and nature-inspired cloud environments to optimize Data Center
Response Time and overall performance. It identifies research gaps, introduces a
fault-tolerant framework, and explores existing frameworks in recent literature.

Common load-balancing algorithms mentioned in the paper:

Load-balancing algorithms in cloud computing are categorized into static, dynamic, and nature-inspired types. Static algorithms rely on prior knowledge of the system state, while dynamic algorithms adapt to current conditions, offering flexibility at the cost of potentially higher overhead. Nature-inspired algorithms model biological processes, offering intelligence for complex systems. Dynamic algorithms are preferred for distributed cloud systems due to their efficiency and elimination of storage overhead, while nature-inspired algorithms provide intelligence but may have higher runtime complexity.

| Algorithm Name | Type | Description | Real Use Cases | Current Trends | Past Trends | Future Trends |
|---|---|---|---|---|---|---|
| Min-Min | Static | Minimizes completion time | Grid computing, cloud | High | High | Moderate |
| Round Robin | Static | Assigns tasks in rotation | Web servers, DNS | High | High | Moderate |
| Weighted Round Robin | Static | Assigns tasks based on weight | Web servers, CDN (Content Delivery Networks) | High | High | Moderate |
| Throttled | Dynamic | Limits request rates | Web servers, API | High | Moderate | Moderate |
| Equally Spread Current Execution | Dynamic | Spreads load evenly | Web servers, cloud | High | Moderate | High |
| Least Connection | Dynamic | Directs traffic to the least connections | Load balancers, proxies | Moderate | High | Moderate |
| Honey-Bee | Nature-inspired | Mimics bee behavior in task allocation | Swarm robotics, task scheduling | Moderate | Low | Moderate |
| Ant Colony | Nature-inspired | Emulates ant foraging behavior for routing | Network routing, logistics | High | Moderate | High |
| Particle Swarm | Nature-inspired | Simulates bird flocking behavior for optimization | Function optimization, data clustering | High | Moderate | High |
| Genetic Algorithm | Nature-inspired | Utilizes principles of natural selection for optimization | Genetic programming, feature selection | Moderate | Moderate | High |
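To make one of the static entries concrete, here is a minimal weighted round robin sketch; the server names and weights are invented for the example and are not from the paper:

```python
from itertools import cycle

def weighted_round_robin(servers):
    """Yield server names in proportion to their weights.

    `servers` is a list of (name, weight) pairs; weights are assumed
    to be positive integers (an illustrative simplification).
    """
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)

# Hypothetical pool: vm2 has twice the capacity of vm1.
pool = weighted_round_robin([("vm1", 1), ("vm2", 2)])
assignments = [next(pool) for _ in range(6)]
print(assignments)  # ['vm1', 'vm2', 'vm2', 'vm1', 'vm2', 'vm2']
```

Because the weighting is fixed ahead of time and ignores the servers' current load, this is a static algorithm in the taxonomy above.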

Proposed Framework (fault-tolerant model):

● Dual Load Balancers: Two Load Balancers provide redundancy and continuous operation in case of failure.
  1. Active Load Balancer: the primary balancer, managing traffic, optimizing resources, monitoring server health, and triggering failover.
  2. Passive Load Balancer: a backup balancer activated if the primary fails, ensuring continuous service delivery and fault tolerance.
● Machine Learning Tools: Predictive analytics integrated to anticipate failures in the active Load Balancer.
● Fault Tolerance Enhancement: Aims to enhance fault tolerance capabilities in load balancing systems.
● Improved Migration Techniques: Task migration processes optimized for efficient workload balancing.

Addressed Gaps and Solutions:

| Identified Gap | How the Paper Overcame the Gap |
|---|---|
| Lack of fault tolerance focus | Proposed a fault-tolerant model with dual Load Balancers and predictive analytics for failure prediction. |
| Inadequate migration techniques | Introduced improved migration techniques in the framework to optimize task movement and workload balancing. |
| Insufficient focus on response time optimization | Emphasized the need for efficient load balancing to reduce response times and enhance system performance. |
| Limited consideration of waiting times | Addressed the challenge of waiting times by optimizing task allocation and resource utilization. |

Conclusion:

The proposed fault-tolerant model introduces a novel approach to enhance fault tolerance, improve migration techniques, and prevent node failures in load-balancing systems. By leveraging dual Load Balancers and predictive analytics, the system is able to optimize reliability and performance in cloud computing environments.
Paper 2: A novel load balancing technique for cloud computing platform based
on PSO

The paper "A novel load balancing technique for cloud computing platform based on
PSO" addresses the challenge of load balancing and task scheduling in cloud
computing environments. Load balancing involves sharing tasks among multiple
machines to speed up job completion and ensure VMs perform well. The primary focus
is on optimizing resource utilization by minimizing makespan and balancing the load
among virtual machines (VMs) through efficient task scheduling.

Before the proposed work, existing systems encountered limitations like inefficient
resource usage, lengthy makespan, and suboptimal load balancing algorithms. Past
approaches, including PSO-based task scheduling, load-balancing algorithms like
PSOBTS and L-PSO, and dynamic load balancing, aimed to address these issues.

PSO:

PSO is a metaheuristic optimization algorithm inspired by the social behavior of bird flocking. PSO starts working after the initialization of the virtual machines and checks the availability of resources for task assignment; if a resource is available, it is assigned to the task. PSO calculates the total execution time and resource utilization. Each particle represents a solution at its individual position; the velocity of each particle changes, and the positions are recomputed. This process repeats until an optimized solution is found.
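The loop just described (initialize particles, evaluate fitness, update velocities and positions, track personal and global bests) can be sketched as a minimal generic PSO. The inertia and acceleration constants are textbook defaults, not values from the paper:

```python
import random

def pso_minimize(fitness, dim, n_particles=20, iters=100, seed=42):
    """Minimal particle swarm optimization sketch.

    Each particle holds a position (candidate solution), a velocity,
    and its personal best (pbest); the swarm tracks a global best
    (gbest). Constants w, c1, c2 are typical textbook values.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:           # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:          # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: minimum 0 at the origin.
best, best_val = pso_minimize(lambda p: sum(x * x for x in p), dim=2)
print(best_val)  # close to 0
```

In a scheduling setting the "position" would encode a task-to-VM mapping and the fitness would be an execution-time measure, as the paper describes.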

The paper introduces the LBMPSO technique (Load Balancing Modified Particle Swarm Optimization) as a novel approach to improve load balancing in cloud computing platforms. LBMPSO utilizes a modified PSO algorithm for task scheduling, evaluating the ideal arrangement of tasks among VMs using a fitness function based on execution times. By optimizing task allocation and load balancing, LBMPSO aims to reduce makespan, increase resource utilization, and balance the load on each VM effectively.
LBMPSO:

The mapping is done with PSO to focus on the resource allocation strategy, ensuring no task is left in the buffer and total execution time is optimized. The model is partitioned into different buffers containing various tasks and resource information to optimize makespan and resource allocation strategies. The proposed model schedules upcoming tasks based on available cloud resources and calculates a fitness value for each particle. Pbest and Gbest values are updated during execution based on current fitness values and best positions. The resource manager regularly checks VM status and informs the task scheduler, which decides task allocation to VMs and maintains load balance. This process continues until all tasks are scheduled to available VMs using an adaptive load balancing approach.
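A minimal sketch of the kind of fitness function such a scheduler minimizes: the makespan of a candidate task-to-VM mapping. The task lengths, VM speeds, and the brute-force search are illustrative only, not the paper's data:

```python
from itertools import product

def makespan(assignment, task_lengths, vm_speeds):
    """Fitness for PSO-style task scheduling: the makespan of a
    task-to-VM assignment. assignment[i] is the VM index for task i;
    a task's execution time is length / speed."""
    finish = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_speeds[vm]
    return max(finish)  # the slowest VM determines makespan

tasks = [400, 200, 300, 100]   # task lengths (e.g., MI) -- illustrative
vms = [100, 50]                # VM speeds (e.g., MIPS) -- illustrative
print(makespan([0, 1, 0, 1], tasks, vms))  # 7.0: vm0 busy 7.0, vm1 busy 6.0

# With 4 tasks and 2 VMs the search space is tiny, so we can check
# the optimum exhaustively; a PSO particle would search this space.
best = min(product(range(len(vms)), repeat=len(tasks)),
           key=lambda a: makespan(a, tasks, vms))
print(makespan(best, tasks, vms))  # 7.0
```

Each LBMPSO particle encodes one such assignment; Pbest/Gbest track the assignments with the lowest makespan seen so far.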

By implementing LBMPSO in the CloudSim simulator, the paper demonstrates significant improvements in reducing makespan and enhancing resource utilization compared to existing techniques like PSOBTS, L-PSO, and DLBA. The proposed technique effectively overcomes the limitations of previous systems by providing a more efficient and optimized solution for load balancing and task scheduling in cloud computing environments.

Comparison of PSO & LBMPSO:

| PSO | LBMPSO |
|---|---|
| Metaheuristic inspired by social behavior | Modified for load balancing in cloud computing |
| Optimizes task allocation in cloud computing | Balances loads among VMs for optimal resource utilization |
| Minimizes makespan and maximizes utilization | Focuses on makespan, resource utilization, and load balance |
Comparison of Limitations in Load Balancing Techniques:

| Limitation | Standard Load Balancing Techniques | Modified Technique (LBMPSO) |
|---|---|---|
| Resource Utilization | Inefficient utilization of resources | Improved resource utilization efficiency |
| Makespan | High makespan time | Reduced makespan time |
| Load Balancing | Lack of optimization in load balancing algorithms | Enhanced load balancing optimization |


Conclusion:
The LBMPSO technique effectively reduces makespan, increases resource utilization, and balances loads in cloud computing environments, outperforming existing methods. Future work aims to enhance quality-of-service parameters for further improvements.
Paper 3: Hybridization of meta-heuristic algorithm for load balancing in cloud
computing environment

The problem addressed in the paper is the need for efficient load balancing in cloud
computing environments. Specifically, the challenge of distributing tasks among virtual
machines (VMs) to optimize performance, reduce response times, and maximize
throughput is highlighted. The goal is to achieve better resource utilization and overall
system efficiency in a cloud network.

Existing System:

| Algorithm | Description |
|---|---|
| Hierarchical load balancing schemes | Organize tasks into a hierarchical structure for efficient distribution across nodes. |
| Honey bee behavior-based load balancing | Distribute tasks based on collective decision-making similar to honey bee behavior. |
| Dynamic load balancing through heat diffusion | Adjust task distribution dynamically by simulating workload diffusion across nodes. |
| Improved particle swarm optimization | Enhance optimization techniques to allocate tasks more effectively among nodes. |
| Tabu search algorithms for resource management | Use heuristic search to optimize resource allocation decisions while avoiding past choices. |

The paper introduces a novel approach called QMPSO, which hybridizes modified Particle Swarm Optimization (MPSO) with an improved Q-learning algorithm for load balancing in cloud computing. The QMPSO methodology aims to dynamically balance the load among VMs by minimizing a fitness function that considers factors such as load differences between hosts, energy consumption, and task distribution across processing units. By integrating Q-learning with MPSO, the proposed solution enhances convergence rates and performance metrics for load balancing in cloud environments.
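The Q-learning side of the hybrid can be sketched as the standard tabular update rule that guides the PSO velocity adjustment. This is a generic illustration: the states (load levels), actions (inertia-weight choices), and constants are invented for the example, not the paper's exact formulation:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    In a QMPSO-style hybrid, the learned Q-values steer how the
    PSO velocity parameters are adjusted under different loads."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical: states are VM load levels, actions are inertia weights.
states = ["underloaded", "balanced", "overloaded"]
actions = [0.4, 0.7, 0.9]
Q = {s: {a: 0.0 for a in actions} for s in states}

# Choosing w=0.4 while overloaded led to a balanced state: reward it.
q_update(Q, "overloaded", 0.4, reward=1.0, next_state="balanced")
print(Q["overloaded"][0.4])  # 0.1
```

Over many such updates, the table comes to favor parameter choices that historically moved the system toward a balanced load, which is the adaptability the paper attributes to the hybrid.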

The paper concludes that the QMPSO methodology offers a promising solution for
improving load balancing in cloud computing environments. By leveraging the
hybridization of MPSO and Q-learning, the proposed algorithm demonstrates enhanced
performance in terms of load optimization, response time reduction, and overall system
efficiency. The validation of the algorithm through simulations and real scenarios
confirms its effectiveness in achieving better load distribution and resource utilization in
cloud networks.

Challenges Addressed by QMPSO Methodology:

| Problem/Limitation | How the Proposed QMPSO Methodology Overcomes It |
|---|---|
| Inefficient load balancing algorithms | Combines MPSO and Q-learning for dynamic load balancing, optimizing task distribution among VMs. |
| Lack of adaptability to changing workload patterns | Adjusts velocity based on Q-learning to handle dynamic workload changes effectively. |
| Increased response times and task delays | Minimizes response times and task delays through optimized load distribution and task prioritization. |
| Inefficient resource utilization | Enhances resource utilization by balancing tasks across VMs, reducing idle time, and improving efficiency. |
| Lack of system performance metrics | Provides performance metrics such as makespan reduction, task migration efficiency, and improved response times. |
| Limited optimization capabilities | Offers a robust optimization approach by integrating meta-heuristic algorithms for enhanced convergence rates. |
| Ineffective task prioritization | Prioritizes tasks effectively based on load differences and energy consumption, improving overall system performance. |
| Challenges in real-world scenarios | Validated through simulations and real scenarios, demonstrating effectiveness and robustness in practical cloud environments. |
Performance Metrics Comparison:

| Metric | Existing Approaches (MPSO & Q-learning) | Proposed QMPSO Methodology |
|---|---|---|
| Task Migration | Moderate | Efficient |
| Response Time | High | Reduced |
| Task Delays | Common | Minimized |
| Idle Time | Substantial | Reduced |
| Makespan Before Balance | Prolonged | Decreased |
| Makespan After Balance | Slight improvement | Significant reduction |


Conclusion:
QMPSO enhances cloud load balancing effectively. It is stable, efficient, and addresses dynamic-environment challenges, overcoming existing limitations and optimizing resource allocation. It is promising for improving system performance in cloud computing.
Name - Kunjan Bharade
C No. - C22020441610
Roll No. - 4610
BTech IT

Introduction :

● Cloud computing technology is experiencing remarkable growth due to


advancements in communication technology and the widespread use of the
Internet. It allows users to access hardware and software resources over the
Internet. Cloud Service Providers (CSPs) offer services on a rental basis,
managing virtual cloud resources.
● Cloud Computing (CC) finds applications in data storage, analysis, and IoT. Task
scheduling and resource allocation are vital operations in CC, ensuring efficient
resource utilization. CSPs leverage scheduling techniques to manage resources
effectively.
● Load balancing, crucial for system performance, distributes dynamic workloads
evenly among nodes, referred to as Load Balancing as a Service (LBaaS). It is
the mechanism of detecting overloaded and underloaded nodes and then
balancing the load among them. It involves task allocation and VM/task migration
management, optimizing resource utilization.
● Efficient resource scheduling and load balancing are essential for effective file
sharing in cloud environments. Effective task scheduling organizes input
requests, optimizing resource usage and preventing delays.
● CC offers flexible resource utilization through pay-as-you-go models, enabling
users to choose services as needed. CC research focuses on task scheduling
and resource allocation to optimize resource usage and resolve operational
challenges.
● The scheduling or allocation of user requests (tasks) in the cloud environment
poses an NP-hard optimization challenge. Depending on the cloud infrastructure
and user demands, the system may experience varying loads, ranging from
underloaded to overloaded or balanced. Situations such as underloading and
overloading can lead to various system failures related to power consumption,
execution time, and machine failures.
● Hence, load balancing becomes essential to address these issues effectively.
Paper 1 : Load balancing in cloud computing: A big picture

Summary:
The paper addresses multiple load balancing strategies in cloud computing aimed at optimizing various performance metrics. It introduces a taxonomy for load balancing algorithms in the cloud and provides a concise overview of performance parameters studied in existing literature and their impacts. Performance evaluation of heuristic-based algorithms is conducted through simulations using the CloudSim simulator, with detailed results presented.

Cloud computing architecture:


The architecture of cloud computing differs from traditional distributed systems in several aspects:
1. it boasts high scalability,
2. it functions as an abstract entity catering to different service levels for cloud consumers,
3. it is governed by economies of scale, and
4. it provides dynamically demanded services through virtualization.
A single-host cloud architecture consists of the following:

● The hardware layer comprises virtualized hardware resources such as the processor, main memory, secondary storage, and network bandwidth. Acting as an intermediary between the hardware and the guest operating systems running in virtual machines (VMs), the Virtual Machine Monitor (VMM) or hypervisor (e.g., Xen, VMware, UML, Denali) enables multiple operating systems to run applications on a single hardware platform concurrently.
● Each guest operating system or VM hosts a variety of heterogeneous
applications, serving as the fundamental unit for executing applications or service
requests.

● In a cloud data center, a finite number of diverse physical hosts are present, each
identified by a host identification number and characterized by processing
elements, processing speed in terms of Million Instructions Per Second (MIPS),
memory size, bandwidth, and other attributes. These hosts accommodate
several VMs, each possessing attributes similar to those of a host.
● Tasks originating from various users are directed to the central load balancer or
serial scheduler for resource mapping within the cloud environment. Each
computing node (VM) executes a single task at a time, with the load balancer
assigning incoming requests to VMs if sufficient resources are available to meet
deadlines.
● Tasks that cannot be immediately executed wait based on Service Level
Agreement (SLA) conditions. Upon task completion, the resources utilized by the
task on a specific VM are released, potentially creating new VMs to handle
additional requests.
● The scheduling model within the cloud data center necessitates load balancing
due to the vast array of heterogeneous input tasks with varying resource
requirements. Input tasks (T1, T2, ..., Tn) are submitted to the cloud system's
task queue, where the VM manager assesses resource availability, active VMs,
and task queue lengths across hosts. If the available active VMs can
accommodate the input tasks, the VM manager assigns the tasks to the task
scheduler. Otherwise, the VM manager creates necessary VMs in hosts with
suitable resource availability.
● The task scheduler functions as a load balancer, orchestrating the mapping of
tasks to VMs based on their resource demands, with each host in the cloud
supporting a finite number of active VMs.
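The task-scheduler behavior described above can be sketched as a simple least-loaded assignment loop. SLA deadline checks and on-demand VM creation are omitted, and all names and numbers are illustrative, not from the paper:

```python
import heapq

def schedule(tasks, vm_speeds):
    """Sketch of a central load balancer: each incoming task goes to
    the VM that will become free earliest (a least-loaded policy).
    Returns the VM index chosen for each task, in arrival order."""
    # min-heap of (time the VM becomes free, vm index)
    heap = [(0.0, vm) for vm in range(len(vm_speeds))]
    heapq.heapify(heap)
    placement = []
    for length in tasks:
        free_at, vm = heapq.heappop(heap)      # earliest-free VM
        free_at += length / vm_speeds[vm]      # run this task on it
        placement.append(vm)
        heapq.heappush(heap, (free_at, vm))
    return placement

# Hypothetical task lengths (MI) and VM speeds (MIPS).
print(schedule([400, 200, 300, 100], [100, 50]))  # [0, 1, 0, 1]
```

A real scheduler would additionally hold tasks in the queue when no VM can meet the SLA deadline and release resources on completion, as the bullets above describe.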

Performance metrics affecting Load Balancing:

| Sr. No. | Metric | Definition |
|---|---|---|
| 1 | Throughput (TP) | The number of user requests (tasks) executed per unit time by a virtual machine. |
| 2 | Thrashing (TH) | Occurs when memory or other resources become exhausted or too limited for the system to serve user requests; in the cloud environment, it occurs when VMs spend their time in migration rather than in properly scheduled work. |
| 3 | Reliability (R) | The degree to which the system consistently performs according to its specifications. On a failure during task execution, the task is transferred to other resources (VMs) to improve reliability; a reliable system improves the stability of the system. |
| 4 | Accuracy (A) | The ability of a measurement to match the actual value of the task execution being measured. |
| 5 | Predictability (PR) | The degree to which task allocation, task execution, and task completion can be predicted from the available cloud resources (virtual machines). |
| 6 | Makespan (MS) | The total time required to complete all tasks submitted to the system; the makespan of the system is the maximum time taken by any host running in the data center. |
| 7 | Scalability (S) | The ability of a balanced system to remain balanced when the number or size of tasks (the workload) increases. |
| 8 | Fault Tolerance (FT) | The capability of the system to perform uninterrupted service even if one or more system elements fail. |
| 9 | Associated Overhead (AO) | The overhead created by the execution of the load balancing algorithms. |
| 10 | Migration Time (MT) | The actual time required to migrate a task or VM from one resource to another, whether between VMs within a single host or across different hosts. |
| 11 | Response Time (RT) | The time required by the system to respond to a task: the sum of transmission time, waiting time, and service time. |
| 12 | Associated Cost (AC) | A cost that depends on the percentage of resource utilization. |
| 13 | Energy Consumption (EC) | The amount of energy absorbed by all ICT devices connected in the system. |

Classification of load balancing algorithms:

There are two balancing strategies for load balancing:

| Static Strategy | Dynamic Strategy |
|---|---|
| Acts on VMs without any load information. | The current load information of VMs is available before allocation. |
| Best fits stable environments with homogeneous systems. | More adaptable and effective in both homogeneous and heterogeneous environments. |
| Less system overhead. | More system overhead. |
| Typically rests on two assumptions: tasks arrive at the beginning, and physical machines are available from the start. The resource state is updated after each task is scheduled. | The load is distributed among physical machines at run time; task arrival times vary, and virtual machines are created according to the type of input tasks. Two categories: off-line (batch) mode, where tasks are allocated only at predefined moments, and on-line mode, where a user request (task) is mapped onto a computing node as soon as it reaches the scheduler. |
| Presented heuristics: OLB, MET, MCT, GA, Switching Algorithm, TABU, A-star algorithm, Min-Min, Min–Max. | Presented heuristics for on-line mode: OLB, MET, MCT, SA. |

Various load balancing algorithms are as follows:

| Sr. No. | Algorithm | Description |
|---|---|---|
| 1 | OLB (Opportunistic Load Balancing) | Used in both static and dynamic (on-line mode) strategies in a cloud environment. This heuristic always allocates tasks to virtual machines arbitrarily and then checks for the next available machine. |
| 2 | MET (Minimum Execution Time) | Also known as LBA (Limited Best Assignment) or UDA (User Directed Assignment); used in both static and dynamic (on-line mode) strategies. The scheduler assigns each task to the VM with the lowest execution time according to the Expected Time to Compute (ETC) matrix, so the system performs all tasks with minimum execution time. |
| 3 | MCT (Minimum Completion Time) | Used in both static and dynamic (on-line mode) load balancing strategies. It allocates each task to the core that has the least completion time. |
| 4 | Min–Min | Selects the task with the least size and chooses a cloud resource (VM) that has the minimum capacity. After a task is allocated to a VM, it is removed from the queue, and the procedure continues with the distribution of all unallocated tasks. |
| 5 | Min–Max | The basic Max–Min procedure in a cloud environment selects the task with the larger size and chooses a cloud resource (VM) that has the minimum processing capacity. After a task is allocated to a VM, it is removed from the queue, and the procedure continues with the distribution of all unallocated tasks. |
| 6 | Genetic Algorithm (GA) | Based on a population of individual chromosomes (possible allocations), each with fitness values (e.g., energy consumption, makespan, throughput) to optimize. The population is encoded as binary strings; chromosomes undergo random single-point crossover, with a mutation probability of 0.05. |
| 7 | Simulated Annealing (SA) | A method for resolving unconstrained and bound-constrained optimization problems. At each iteration of the algorithm, a new point is generated based on a probability distribution. |
| 8 | Tabu Search (TS) | A meta-heuristic built on local search that explores the solution space beyond local optimality, using adaptive memory for more flexible search behavior. |
| 9 | A-star Search | Extensively applied as a graph-searching algorithm; it combines the benefits of depth-first and breadth-first search. It maintains two lists: the first acts as a priority queue of tasks, and the second holds the processing capacities of all VMs. |
| 10 | Switching Algorithm | Used in the cloud environment for the migration of tasks or VMs; with this method the fault-tolerance property can be achieved. |
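As an illustration of how one of these heuristics operates, here is a minimal Min-Min sketch: at each step it picks the (task, VM) pair with the smallest completion time given the VMs' current ready times. The inputs are invented for the example:

```python
def min_min(task_lengths, vm_speeds):
    """Min-Min heuristic sketch: repeatedly assign the task whose
    minimum completion time (over all VMs) is smallest to the VM that
    gives that time. `ready[vm]` is when each VM is next free."""
    ready = [0.0] * len(vm_speeds)
    unassigned = dict(enumerate(task_lengths))
    schedule = {}
    while unassigned:
        best_task, best_vm, best_ct = None, None, float("inf")
        for t, length in unassigned.items():
            for vm, speed in enumerate(vm_speeds):
                ct = ready[vm] + length / speed   # completion time of (t, vm)
                if ct < best_ct:
                    best_task, best_vm, best_ct = t, vm, ct
        ready[best_vm] = best_ct
        schedule[best_task] = best_vm
        del unassigned[best_task]
    return schedule, max(ready)   # mapping and resulting makespan

# Hypothetical task lengths (MI) and VM speeds (MIPS).
sched, ms = min_min([400, 200, 300, 100], [100, 50])
print(sched, ms)
```

Because Min-Min greedily favors short tasks on the fast VM, it can leave the longest task to finish late; this is the behavior the simulation section below compares against MCT and the other heuristics.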

Simulation results:
● The authors analyzed the load balancing algorithms (MCT, MET, Min-Min, Max–Min, and Min–Max) through simulation with generated datasets.
● The experiments were performed using the CloudSim-3.0.3 simulator on an Intel Core i7 4th Generation system (3.4 GHz CPU, 8 GB RAM) running Microsoft Windows 8. The arrival rate of tasks follows a Pareto distribution.
● To analyze the algorithms, the authors considered makespan and energy consumption of the system as performance metrics.
● The authors conducted two sets of simulation scenarios:
❖ Scenario-1: the total number of tasks is fixed at 500; the number of VMs varies from 20 to 200 in intervals of 20.
❖ Scenario-2: the total number of VMs is fixed at 100; the number of input tasks varies from 100 to 1000 in intervals of 100.
● A comparative report is shown in Figs. 4 and 5 (Scenario-1) and Figs. 6 and 7 (Scenario-2). In both, the makespan and energy consumption are minimum for the MCT load balancing algorithm among the five compared algorithms.
● In both scenarios, the Max–Min load balancing algorithm did not perform better than the MCT, MET, Min-Min, and Min–Max algorithms.

Conclusion :
The paper discusses the importance of load balancing in cloud computing to enhance
system stability and performance. It categorizes load balancing algorithms into static
and dynamic approaches, highlighting their impact on system efficiency. Key
performance metrics such as makespan and energy consumption are identified as
crucial indicators of load balancing effectiveness. The study emphasizes the
significance of task allocation and execution in optimizing resource utilization. Overall,
the paper provides valuable insights into load balancing strategies in cloud
environments and suggests areas for future research and improvement in load
balancing algorithms.
Paper 2 : Resource scheduling algorithm with load balancing for cloud service
provisioning

Summary :
The paper addresses the challenges of resource scheduling, response time
optimization, and load balancing in cloud data centers by proposing a novel fuzzy-based
approach and multidimensional queuing network model for efficient resource allocation
and improved performance.

Existing Approaches Discussed in the Paper :

● Scalable Traffic Management (STM) : This approach focuses on reducing the


maximum link load to ensure load balancing between users in the network.
However, it may not be suitable for multidimensional resource scheduling.
● Scalable Workload-Driven Partitioning (SWDP) scheme : This scheme aims to
improve response time and throughput for distributed transactions by
partitioning workloads. It addresses load balancing but may have limitations in
achieving high success rates.
● Task Scheduling using Honey Bee Behavior : This approach utilizes honey bee
behavior to optimize machine utilization and balance load across virtual
machines in a cloud environment. It focuses on task scheduling efficiency.

Solution Proposed in the Paper :


The paper proposes a new method called Fuzzy-based Multidimensional Resource
Scheduling and Queuing Network (F-MRSQN) for efficient resource scheduling and load
balancing in cloud data centers. The key features of the proposed F-MRSQN method
include:

1. Utilization of fuzzy logic for multidimensional resource scheduling: The


method incorporates fuzzy logic techniques to optimize resource allocation
based on CPU, memory, and bandwidth requirements from different cloud service
providers to cloud users [T4].

2. Multidimensional Queuing Load Optimization (MQLO) algorithm: The method


employs the MQLO algorithm to balance loads on scheduled resources, thereby
reducing response time and enhancing the average success rate for cloud user
requests [T2], [T3].
3. Integration of Fuzzy-based Multidimensional Resource Scheduling: The
method integrates fuzzy-based resource scheduling with queuing network
models to improve scheduling efficiency and response time in cloud
environments [T1].

4. Efficient load balancing and resource utilization: The F-MRSQN method aims to
achieve maximum resource utilization and minimum processing time by
optimizing resource allocation and balancing loads across cloud servers [T4].

Overall, the proposed F-MRSQN method introduces a comprehensive approach to


resource scheduling and load balancing in cloud data centers, leveraging fuzzy logic
and queuing network optimization techniques to enhance performance metrics and
improve cloud service provisioning [T1], [T2], [T3].

Fuzzy-based multidimensional resource scheduling :


● The F-MRSQN method begins by defining input and output variables. The input "In" comprises three linguistic variables: Bandwidth (BW), Memory (Mem), and CPU. The output variable, denoted "ρ", signifies efficient resource scheduling.
In → {BW, Mem, CPU}; BW → {L, M, H}; Mem → {S, M, La}; CPU → {L, M, H}
● After identifying the input linguistic variables and output variables, the
subsequent stage in the F-MRSQN method involves applying a trapezoidal
fuzzification function to each input linguistic variable to represent an associated
observation. Through fuzzification, a real scalar value is transformed into a fuzzy
value. The trapezoidal fuzzification function utilized in the F-MRSQN method is
expressed as follows: [fd:{BW, Mem, CPU} → R]
● ‘R’ denotes the fuzzy sets with the trapezoidal fuzzification function ‘fd’ for the
multidimensional resources ‘BW, Mem, CPU’ to be scheduled in cloud
environment. With the resultant trapezoidal fuzzification, the F-MRSQN method
measures the fuzzy inferences using fuzzy square.
● In the proposed method, with the input being ‘In’ and output being ‘ρ’, the fuzzy
inference using fuzzy square is expressed as given below. If In = A then ρ = B
● In the final stage of the F-MRSQN method, defuzzification is carried out to
determine a single number that aligns with the membership function, thereby
generating the output. The F-MRSQN method employs the centroid method to
convert the output values obtained from the fuzzy inferences, which are
expressed as fuzzy sets, as follows:
Fuzzy set (y*) = ∫ µA(y) · y dy / ∫ µA(y) dy
● As outlined in Algorithm 1, the Fuzzy-based MRS algorithm begins by gathering
input parameters, including the number of cloud users requiring resource
allocation, available cloud servers, and the input linguistic variables such as
bandwidth, memory, and CPU. In step 2, the resource manager analyzes the
resource requirements of each cloud user to optimize resource scheduling in the
cloud environment efficiently. Step 3 involves obtaining different linguistic input
variables from the cloud users, followed by fuzzification. Subsequently, fuzzy
square inference is employed, leading to defuzzification, which assesses
resource scheduling efficiency and data center utilization.
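As an illustration of the fuzzification and defuzzification stages described above, the sketch below implements a trapezoidal membership function and centroid defuzzification in Python. The breakpoints and the 0..100 utilisation scale are illustrative assumptions, not values from the paper.

```python
def trapezoid(y, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if y <= a or y >= d:
        return 0.0
    if b <= y <= c:
        return 1.0
    if y < b:
        return (y - a) / (b - a)
    return (d - y) / (d - c)

def centroid_defuzzify(membership, lo, hi, steps=1000):
    """Centroid method: integral of mu(y)*y dy over integral of mu(y) dy,
    approximated numerically with the midpoint rule."""
    dy = (hi - lo) / steps
    num = den = 0.0
    for i in range(steps):
        y = lo + (i + 0.5) * dy
        mu = membership(y)
        num += mu * y * dy
        den += mu * dy
    return num / den if den else 0.0

# Hypothetical "Medium" bandwidth fuzzy set on a 0..100 utilisation scale.
medium_bw = lambda y: trapezoid(y, 20, 40, 60, 80)
print(trapezoid(30, 20, 40, 60, 80))          # 0.5 — partially "Medium"
print(centroid_defuzzify(medium_bw, 0, 100))  # ≈ 50 by symmetry
```

The same pair of functions would be applied to each of the "BW", "Mem", and "CPU" variables in turn, with their own breakpoints per linguistic label (L/M/H etc.).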

Multidimensional queuing network :


● The MQN model of a cloud server consists of ‘R’ resources and ‘C’ different
classes of cloud user requests. In a MQN model, let ‘CSS’ be an aggregate cloud
server state which describes the cloud user requests from each class for each
resource. Then the cloud server state, ‘CSS = (z1, z2, . . . , zn)’ where each ‘zR (R =
3 in our proposed method)’ is represented by a vector ‘zR = (nR1, nR2, . . . , nRm)’,
with ‘nRm’ being the total number of cloud user requests in class ‘m’ at resource
‘R’. Therefore the resource utilization rate ‘UtilR(t)’ is the product of the arrival
rate of class ‘m’ requests at resource ‘R’ in time interval ‘t’, ‘λRm(t)’, and the
demand ‘DemRm’, expressed as: UtilR(t) = DemRm ∗ λRm(t)
● The maximum utilization of each resource must be less than the Resource
Threshold Factor ‘RTF’: Max(DemRm ∗ λRm(t)) ≤ RTF
● The average latency of a cloud server for every class of user request is computed
as: LTR = LTRm(t) = Σ (j = 1 to n) [ DRj / (1 − UtilR(t)) ]
● ‘LTR’ represents the average latency time of class ‘m’ with resources ‘R’. The
average latency time over all classes of user requests in the cloud infrastructure
is evaluated as the arrival-rate-weighted average:
LT = Σ (LTR(t) ∗ λR(t)) / Σ λR(t)
● As depicted in Algorithm 2 above, the MQLO algorithm evaluates the resource
utilization rate for each cloud user to determine if it meets their requirements.
Subsequently, it calculates the average processing time of both individual cloud
servers and all servers in the data center. Based on these evaluations, resource
allocation is carried out to optimize the load efficiently.
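The utilization, threshold, and latency formulas above can be sketched as follows; the demand and arrival-rate figures are made-up illustrative inputs, not values from the paper.

```python
def utilisation(demand, arrival_rate):
    """Util_R(t) = Dem_Rm * lambda_Rm(t) for one class at one resource."""
    return demand * arrival_rate

def within_threshold(demands, rates, rtf):
    """Admission check: the maximum utilisation across classes must not exceed
    the Resource Threshold Factor (RTF)."""
    return max(utilisation(d, r) for d, r in zip(demands, rates)) <= rtf

def avg_latency(demands, rates):
    """Per-class latency D / (1 - Util), then the arrival-rate-weighted average
    over all classes, as in the LT formula above."""
    lats = [d / (1.0 - utilisation(d, r)) for d, r in zip(demands, rates)]
    return sum(l * r for l, r in zip(lats, rates)) / sum(rates)

# Two request classes: service demand in seconds, arrivals per second.
demands, rates = [0.02, 0.05], [10.0, 4.0]
print(within_threshold(demands, rates, rtf=0.8))  # True: both utils are 0.2
print(round(avg_latency(demands, rates), 4))      # ≈ 0.0357 s
```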

Result :
● Average Success Rate (ASR) is the ratio of the number of user requests
addressed by the virtual machine grid through the resource manager at a
particular time in the cloud environment.
● The F-MRSQN method increases the average success rate by 6% compared to
the existing STM and by 12% compared to SWDP.
● Resource scheduling efficiency is defined as the ratio of the number of resources
scheduled based on user requests to the total number of resources in the cloud.
● F-MRSQN improves resource scheduling efficiency by 6% compared to STM and
by 7% compared to SWDP.
● Response time is defined as the time taken by the MQLO algorithm, operating in
a distributed manner, to respond while scheduling resources in the cloud.
● Response time is reduced in the proposed F-MRSQN method by 26% compared
to the existing STM and by 45% compared to SWDP.

Conclusion :
The Fuzzy-based Multidimensional Resource Scheduling and Queuing Network
(F-MRSQN) method is proposed for efficient scheduling of resources and for optimizing
the load of each cloud user request as data centers evolve. Resource scheduling
efficiency is improved by performing Fuzzy-based Multidimensional Resource
Scheduling, after which the multidimensional queuing network balances the load on the
scheduled resources efficiently, enhancing the average success rate for each cloud
user request. The results show that the F-MRSQN method provides better performance,
improving the average success ratio by 9% and reducing the response time by 20%
compared to existing methods.
Paper 3 : Hybridization of firefly and Improved Multi-Objective Particle Swarm
Optimization algorithm for energy efficient load balancing in Cloud Computing
environments

Summary :
The paper addresses the optimization of load balancing in Cloud Computing
environments through the proposal of a new hybrid algorithm called FIMPSO (a
combination of Firefly algorithm and Improved Multi-Objective Particle Swarm
Optimization technique). The FIMPSO algorithm aims to distribute resources effectively
among computers, networks, or servers to manage workload demands and application
demands in a cloud environment.

Existing Approaches Discussed in the Paper :


● Particle Swarm Optimization (PSO) Algorithm: Used for scheduling cloudlets
based on Virtual Machines (VMs) in CloudSim to optimize resource allocation
and task scheduling .
● Gravitational Search Algorithm (GSA): An optimization technique based on
Newton's law of gravity, which uses attractive and repulsive forces to resolve
optimization issues related to load balancing and resource allocation in Cloud
Computing environments .
● Hybrid GSA using Orthogonal Crossover: Developed as a scheduling technique
in Cloud Computing environments to optimize load balancing and resource
allocation .
● Cloud Scientific Workflow Scheduling Algorithm (CLOSURE): Proposed in the
literature based on an attack-defense game model to optimize task scheduling
and resource allocation in cloud environments

Solution Proposed in the Paper :


● A new method called FIMPSO (Firefly and Improved Multi-Objective Particle
Swarm Optimization) algorithm is proposed as a novel approach to optimize load
balancing in Cloud Computing environments. The FIMPSO algorithm is a
hybridization of the Firefly algorithm and the Improved Multi-Objective Particle
Swarm Optimization technique .
● The FIMPSO algorithm aims to enhance the efficiency of load scheduling and
resource allocation in Cloud Computing by combining the strengths of the Firefly
algorithm and the Improved Multi-Objective Particle Swarm Optimization
technique. By leveraging the Firefly algorithm to minimize the search space and
the IMPSO technique to identify enhanced responses, the proposed FIMPSO
algorithm optimizes the allocation of resources and improves performance
metrics in cloud environments

FF algorithm :
Generally, the Firefly Algorithm (FF) rests on three premises:

1. All fireflies are unisex, so a firefly may be attracted to any other firefly
regardless of sex.
2. Attraction is proportional to brightness: a less bright firefly is attracted to and
moves toward a brighter one. Both attraction and perceived brightness diminish
as the distance between fireflies increases, and if no firefly brighter than a given
individual exists, that firefly moves randomly.
3. An objective function is utilized to calculate the brightness of fireflies.

The core principles of the FF algorithm revolve around light intensity and
attraction: attraction between fireflies is quantified through light intensity, while
brightness is determined by the objective function of the optimization problem.
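A minimal sketch of the firefly move implied by these premises, using the standard attractiveness form β = β0·exp(−γ·r²); the coefficient values are conventional defaults, not the paper's tuned parameters.

```python
import math, random

def firefly_step(x_i, x_j, bright_i, bright_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward a brighter firefly j; if j is not brighter,
    firefly i performs a small random walk instead.
    Attractiveness decays with squared distance: beta = beta0 * exp(-gamma * r^2)."""
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    if bright_j > bright_i:
        beta = beta0 * math.exp(-gamma * r2)
        return [a + beta * (b - a) + alpha * (random.random() - 0.5)
                for a, b in zip(x_i, x_j)]
    # No brighter firefly: random move only
    return [a + alpha * (random.random() - 0.5) for a in x_i]

print(firefly_step([0.0, 0.0], [1.0, 1.0], bright_i=0.2, bright_j=0.9))
```

In the FIMPSO hybrid, a step like this narrows the search space before the IMPSO phase refines the candidate solutions.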
IMPSO algorithm :
Here are the steps involved in the IMPSO algorithm summarized in points:

1. Initialization of PSO:
- Determine the swarm set number (J) and particle dimension (D).
- Set computation range for variable values (U(l)mn and U(l)mx).
- Control particle speed and initialize swarm position and speed randomly.
2. Parameter evolution:
- Set maximum iterations (zmx).
- Define higher and lower inertia weights (ωmn = 0.4, ωmx = 0.9).
- Establish training factors (t1 = t2 = g = 2).
3. Evaluation of objective function:
- Compute objective function values for each particle.
- Perform particle generalization.
4. Particle position initialization:
- Set personal best position (pbest(l)) and global best position (gbest(l)).
5. Archive initialization:
- Save non-dominated solutions into the archive.
6. Iterative process:
- If maximum iterations are not reached:
(a) Explore gbest(l) from the archive.
(b) Update particle position and speed.
(c) Perform mutation.
(d) Update archive.
(e) Upgrade personal best solutions.
(f) Update iteration value.
7. Repeat cycle until iteration requirements are met.
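The velocity and position update at step 6(b), together with the linearly decreasing inertia weight implied by the ωmn/ωmx bounds in step 2, can be sketched as follows. The update form is the standard PSO rule; IMPSO's archive maintenance and mutation steps are omitted for brevity.

```python
import random

def pso_step(pos, vel, pbest, gbest, z, zmx, w_min=0.4, w_max=0.9, t1=2.0, t2=2.0):
    """One particle update: inertia weight decreases linearly from w_max to
    w_min over the iterations (z of zmx), then velocity is a blend of the old
    velocity, the pull toward the personal best, and the pull toward the
    global best; position moves by the new velocity."""
    w = w_max - (w_max - w_min) * z / zmx
    new_vel = [w * v
               + t1 * random.random() * (pb - x)
               + t2 * random.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

pos, vel = [0.5, 0.5], [0.0, 0.0]
pos, vel = pso_step(pos, vel, pbest=[0.2, 0.8], gbest=[0.1, 0.9], z=10, zmx=100)
print(pos)  # first coordinate pulled down toward 0.1–0.2, second up toward 0.8–0.9
```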
Results :
The FIMPSO algorithm produced favorable results, achieving an average response time
of 13.58 ms, the highest CPU utilization of 98%, and the highest memory utilization of
93% under extra-large tasks. Additionally, the proposed method demonstrated a
maximum reliability of 67% and a make span of 148, along with the highest average
throughput of 72% under extra-large tasks.

Hence, it can be deduced that the FIMPSO algorithm maintains a high average
throughput across all task sizes, although the average throughput decreases as the
number of tasks increases.
Conclusion :
The simulation outcome showed that the proposed FIMPSO model excelled in
performance over the compared methods. From the simulation outcome, it is
understood that the FIMPSO algorithm achieved effective results, with the least
average response time of 13.58 ms, maximum CPU utilization of 98%, memory
utilization of 93%, reliability of 67% and throughput of 72%, along with a makespan of
148, which was superior to all the other compared methods.
Name -Ashlesha Ahirwadi
C No-C22020441608
Roll No-4608
BTech.IT

Paper 1 - An improved Hybrid Fuzzy-Ant Colony Algorithm Applied to
Load Balancing in Cloud Computing Environment

Introduction :

The paper introduces an innovative Hybrid Fuzzy-Ant Colony Algorithm designed to
improve load balancing in Cloud Computing environments. By combining Fuzzy logic
and ant colony optimization concepts, the algorithm aims to address the challenges
posed by a large number of requests and servers in the Cloud. The focus is on
optimizing load balancing and response time objectives while considering the
performance dependency on ACO parameters. The study utilizes Taguchi experimental
design to identify optimal parameter values and a fuzzy module to enhance pheromone
value evaluation for efficient calculation duration.

Existing Solution :

The existing solution presented in the paper involves the use of conventional load
balancing algorithms in Cloud environments. However, due to the high volume of
requests and servers available at any given time, these traditional algorithms may not
be as effective. The ones mentioned in the paper are Genetic Algorithm and Simulated
Annealing.

Genetic Algorithm (GA):


Genetic algorithms are optimization techniques inspired by the process of natural
selection and genetics. In a genetic algorithm, a population of potential solutions
evolves over generations through processes such as selection, crossover, and mutation
to find the optimal solution to a problem. The algorithm starts with a population of
candidate solutions represented as chromosomes, which are evaluated based on a
fitness function. Through selection, individuals with higher fitness are more likely to be
chosen for reproduction.
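A compact sketch of the selection, crossover, and mutation cycle just described, applied to the toy "one-max" problem (maximise the number of 1-bits in a chromosome); all parameter values here are illustrative.

```python
import random

def evolve(pop, fitness, generations=50, p_mut=0.05):
    """Minimal GA: tournament selection of two parents, one-point crossover,
    and per-bit flip mutation; returns the fittest individual of the final
    population."""
    for _ in range(generations):
        nxt = []
        while len(nxt) < len(pop):
            # Tournament selection: the fitter of two random candidates wins
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, len(p1))            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
best = evolve(pop, fitness=sum)   # maximise the number of ones
print(sum(best))
```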

Simulated Annealing:
Simulated annealing is a probabilistic optimization technique inspired by the annealing
process in metallurgy. The algorithm mimics the annealing process where a material is
heated and then slowly cooled to reach a low-energy state. In simulated annealing, a
system starts at a high temperature where random moves are accepted even if they
increase the objective function value. As the temperature decreases over time, the
algorithm becomes more selective, accepting only moves that decrease the objective
function value. This balance between exploration and exploitation allows simulated
annealing to escape local optima and converge towards a global optimum.

Both genetic algorithms and simulated annealing are popular metaheuristic
optimization techniques used to solve complex optimization problems in various
domains.
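The temperature-dependent acceptance rule at the heart of simulated annealing can be sketched as follows, on a toy one-dimensional objective; the temperature schedule and step size are illustrative.

```python
import math, random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=500):
    """Accept any improving move; accept a worsening move with probability
    exp(-delta / T). T decays geometrically each step, so the search grows
    more selective over time, as described in the text."""
    x, t = x0, t0
    for _ in range(steps):
        cand = neighbour(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t *= cooling
    return x

random.seed(2)
# Toy objective: minimise (x - 3)^2 over real-valued x
best = simulated_annealing(cost=lambda x: (x - 3) ** 2,
                           neighbour=lambda x: x + random.uniform(-0.5, 0.5),
                           x0=0.0)
print(round(best, 2))  # converges near 3
```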

Proposed solution:

The proposed solution in the paper is a Hybrid Fuzzy-Ant Colony Algorithm designed to
enhance load balancing in Cloud Computing environments. This novel approach
combines Fuzzy logic and ant colony optimization to address the challenges of
managing a large number of virtual machines and servers in the Cloud. The algorithm
aims to optimize load balancing, response time, and processing time by introducing a
Fuzzy module for pheromone calculation and improving the parameters of the ant
colony optimization algorithm. By adapting the pheromone formula to the context of
Cloud computing and conducting simulations using the CloudAnalyst platform, the
proposed algorithm demonstrates its effectiveness compared to traditional algorithms.
The study focuses on optimizing multiple objectives while leveraging the strengths of
Fuzzy logic and ant colony optimization to achieve efficient load balancing in Cloud
environments.
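The paper's exact fuzzy pheromone formula is not reproduced here; the sketch below shows the generic ACO machinery it builds on — probabilistic VM selection weighted by pheromone and heuristic desirability, followed by evaporation and deposit — with illustrative parameter values.

```python
import random

def choose_vm(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Pick a VM index with probability proportional to tau^alpha * eta^beta
    (roulette-wheel selection)."""
    weights = [(t ** alpha) * (h ** beta) for t, h in zip(pheromone, heuristic)]
    r, acc = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def update_pheromone(pheromone, chosen, quality, rho=0.1):
    """Evaporate every trail by factor (1 - rho), then deposit an amount
    proportional to solution quality on the chosen VM's trail."""
    return [(1 - rho) * t + (quality if i == chosen else 0.0)
            for i, t in enumerate(pheromone)]

tau = [1.0, 1.0, 1.0]   # pheromone per VM
eta = [0.9, 0.5, 0.2]   # heuristic desirability, e.g. inverse of current load
i = choose_vm(tau, eta)
tau = update_pheromone(tau, i, quality=0.5)
```

In the paper's hybrid, the deposit amount would come from the fuzzy module's pheromone evaluation rather than a fixed `quality` value.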

Results and observations :

The comparison of results between the Hybrid Fuzzy-Ant Colony Algorithm (ACO) and
the Round Robin algorithm in the paper shows significant improvements in performance
metrics, particularly in response time and processing time. The results indicate that the
ACO algorithm outperforms the Round Robin algorithm in terms of response time and
processing time across different scenarios. Here is a summary of the performance
improvements achieved by the ACO algorithm over Round Robin:

- Scenario S1: ACO reduced response time by 12.39% and processing time by 28.8%
compared to Round Robin.
- Scenario S2: ACO reduced response time by 9.27% and processing time by 87.9%
compared to Round Robin.
- Scenario S3: ACO reduced response time by 82.93% and processing time by 90.5%
compared to Round Robin.
- Scenario S4: ACO reduced response time by 9.55% and processing time by 35.9%
compared to Round Robin.
- Scenario S5: ACO reduced response time by 37.62% and processing time by 53.2%
compared to Round Robin.
These results demonstrate the superior performance of the Hybrid Fuzzy-Ant Colony
Algorithm over the Round Robin algorithm in optimizing load balancing and achieving
significant improvements in response time and processing time in Cloud Computing
environments.

Conclusion :

The Hybrid Fuzzy-Ant Colony Algorithm proposed in the study offers a novel approach
to enhancing load balancing in Cloud Computing environments. By combining Fuzzy
logic and ant colony optimization, the algorithm demonstrates superior performance in
optimizing response time and processing time compared to traditional algorithms like
Round Robin. The results highlight the effectiveness of the Hybrid Fuzzy-Ant Colony
Algorithm in improving load balancing efficiency without compromising cost objectives.
Overall, the study showcases the potential of this algorithm to significantly enhance
performance metrics in Cloud environments, making it a promising solution for
optimizing load balancing strategies.
Paper 2 - A Secure and Optimized Load Balancing for Multi-tier IoT
and Edge-Cloud Computing Systems

Introduction :

This paper focuses on the increasing demand for efficient and secure computation
offloading techniques in Mobile-edge computing (MEC) systems. The study addresses
the challenges faced by Mobile Device Users (MDUs) in executing resource-intensive
tasks by proposing a novel load balancing algorithm. Additionally, a new security layer
based on electrocardiogram (ECG) signal encryption is introduced to enhance data
security during transmission. The integration of load balancing, computation
offloading, and security measures aims to optimize system performance and energy
consumption in multitier IoT and edge-cloud computing environments.

Existing solution :

Existing solutions in the field of Mobile-edge computing (MEC) have primarily
focused on load balancing algorithms and computation offloading techniques to
optimize system performance and resource utilization. However, these solutions
often face challenges such as high latency, security vulnerabilities, and inefficient
task allocation among small base stations (sBSs).

Proposed solution :

The proposed solution in the IEEE Internet of Things Journal article introduces a
comprehensive approach that combines load balancing and computation offloading
(CO) for multitier Mobile-edge cloud computing systems. The key components of
the proposed solution are outlined below:

1. Load Balancing Algorithm:


- The study presents a load balancing algorithm designed to redistribute Mobile Device
Users (MDUs) among small base stations (sBSs) in MEC systems. This algorithm
considers factors such as the location of sBSs, the current number of users at each
sBS, the number of CPU cycles required for each task, and the uplink data rate.
- By dynamically assigning MDUs to the best available sBS based on these factors, the
algorithm aims to optimize system performance, reduce latency, and enhance task
completion efficiency within latency thresholds.
2. Security Layer:
- A new security layer is introduced in the proposed solution to address data security
concerns during transmission in MEC systems. This security layer incorporates an
advanced encryption standard (AES) cryptographic technique combined with
encryption and decryption keys derived from electrocardiogram (ECG) signals.
- By utilizing ECG signals for key generation, the system enhances data security and
protects against potential security threats, ensuring the confidentiality and integrity
of transmitted information.

3. Optimization Problem Formulation:


- The study formulates an optimization problem that combines load balancing and CO
(LBCO) for multiuser, multitask, multitier MEC systems. The objective of this
optimization problem is to overcome resource constraints and reduce system
overhead, ultimately improving energy efficiency and processing time.
- The optimization problem aims to minimize the consumed energy and processing
time of the entire system by optimizing the allocation of tasks among MDUs and
sBSs while considering latency thresholds and system constraints.

By integrating the load balancing algorithm, security layer, and optimization problem
formulation, the proposed solution offers a comprehensive framework for enhancing
the performance, efficiency, and security of multi-tier IoT and edge-cloud computing
systems in the context of Mobile-edge computing (MEC) environments.
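The paper's assignment rule weighs sBS location, current user count, required CPU cycles, and uplink data rate; the sketch below is a simplified illustration under the assumption that the goal is minimising estimated task completion time. The field names and numeric figures are hypothetical, not the paper's model.

```python
def best_sbs(task_cycles, data_bits, stations):
    """Assign a task to the sBS with the lowest estimated finish time:
    upload delay (bits / uplink rate) plus compute delay, where each
    station's CPU capacity is shared among the users already served there."""
    def finish_time(s):
        upload = data_bits / s["uplink_bps"]
        compute = task_cycles / (s["cpu_hz"] / (s["users"] + 1))
        return upload + compute
    return min(range(len(stations)), key=lambda i: finish_time(stations[i]))

stations = [
    {"uplink_bps": 5e6, "cpu_hz": 10e9, "users": 8},   # fast link, crowded
    {"uplink_bps": 2e6, "cpu_hz": 10e9, "users": 1},   # slow link, lightly loaded
]
print(best_sbs(task_cycles=2e9, data_bits=4e6, stations=stations))
# → 1 (the lightly loaded station finishes sooner despite its slower uplink)
```

This captures the load-balancing intuition in the text: the "best" sBS is not simply the nearest or fastest link, but the one where the combined communication and computation delay stays within the latency threshold.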

Results and Observations :

The results and observations of the study presented in the IEEE Internet of Things
Journal article highlight the performance improvements and benefits of the proposed
load balancing and computation offloading (CO) technique with the integrated security
layer for multitier Mobile-edge cloud computing systems. The key findings and
observations are detailed below:

1. System Overhead Reduction:


- The simulation results demonstrate that the proposed model, with and without the
additional security mechanism, offers significant savings in system overhead
consumption compared to existing models. Specifically, the proposed model shows
savings of about 68.2%, 72.4%, 13.9%, and 20.1% for system overhead consumption
relative to the LE and CO model in a previous study.
2. Load Balancing Efficiency:
- The proposed load balancing algorithm effectively redistributes Mobile Device Users
(MDUs) among small base stations (sBSs) based on computation and communication
resources. This efficient redistribution of MDUs helps balance the load between sBSs
and reduces overall communication costs, leading to improved system performance
and resource utilization.

3. Security Enhancement:
- The introduction of the new security layer, which incorporates AES cryptographic
strategy with encryption and decryption keys derived from ECG signals, enhances data
security during different stages of data transmission. By infusing ECG signal
parameters into the encryption process, the system ensures secure communication
and protects sensitive information from potential security threats.

4. Energy and Temporal Efficiency:


- The integration of load balancing, CO, and security models as a binary linear problem
aims to decrease system cost in terms of energy consumption and temporal demands.
The proposed low-complexity secure load balancing and CO algorithm provides an
optimal solution for task offloading decisions, leading to improved energy efficiency and
reduced processing time .

5. Performance Comparison:
- The simulation-based experiments demonstrate that the proposed load balancing
and CO algorithm, with or without the additional security layer, significantly reduces
system consumption by about 68.2% to 72.4% compared to local execution (LE).
This highlights the effectiveness of the proposed approach in optimizing system
performance and resource utilization in multitier Mobile-edge cloud computing
systems.
Conclusion :
Overall, the results and observations of the study emphasize the effectiveness of the
proposed solution in enhancing system efficiency, load balancing, security, and
resource allocation in multitier IoT and edge-cloud computing environments, ultimately
leading to improved performance and reduced system overhead.
Paper 3 - A Novel Weight-Assignment Load Balancing Algorithm for Cloud
Application

Introduction :

The research study presented introduces a novel weight-assignment load balancing
algorithm designed to optimize workload distribution in cloud-based three-tier web
applications. By dynamically assigning weights to server virtual machines based on key
metrics like thread count, CPU usage, RAM utilization, network buffers, and network
bandwidth, the algorithm aims to minimize communication overhead and enhance
system performance. Through experimental validation in a private cloud environment,
the algorithm demonstrates improved response times and resilience to workload
fluctuations, offering a promising solution for efficient load balancing in cloud
computing environments.

The problem addressed in the research is the inefficiency of common load balancing
and auto-scaling strategies in combating flash crowds and resource failures in
cloud-deployed applications. Despite the use of these strategies by cloud providers,
performance degradation still occurs due to the limitations of existing load balancers
in adapting to dynamic workload changes caused by flash crowds and resource
failures

Existing Solution :

Key aspects of the existing solutions include:


1. Simpler Load Balancing Techniques: Traditional load balancing algorithms, such
as round-robin, distribute incoming requests evenly across server virtual machines
without considering real-time server metrics or workload variations.
2. Limited Optimization: Existing solutions may not effectively optimize workload
distribution based on server utilization metrics, potentially leading to suboptimal
resource allocation and performance degradation under fluctuating workloads.
3. Performance Comparison: The research study evaluates the proposed novel
weight-assignment load balancing algorithm against these existing solutions to
demonstrate its superiority in terms of response times, resource utilization, and
system efficiency.
Proposed Solution :

The proposed solution in the research study is a novel weight-assignment load
balancing algorithm tailored for cloud-based three-tier web applications. This algorithm
combines dynamic load balancing techniques with carefully selected server metrics,
including thread count, CPU usage, RAM utilization, network buffers, and network
bandwidth. By analyzing these metrics, the algorithm calculates the weight of each
server virtual machine in real-time, enabling optimized workload distribution.

The key features of the proposed solution include:


1. Utilization of Key Server Metrics: By incorporating specific server metrics beyond
traditional measures like CPU and memory utilization, such as thread count and
network bandwidth, the algorithm provides a comprehensive view of server workload.

2. Dynamic Weight Assignment: The algorithm dynamically assigns weights to
server virtual machines based on the analyzed metrics, allowing for adaptive
workload distribution in response to changing system conditions.

3. Performance Improvement: Through experimental evaluation in a private cloud
environment, the proposed algorithm demonstrates enhanced system
performance, reduced communication overhead, and improved resilience to flash
crowds and resource failures.

4. Architecture Compatibility: The algorithm follows a monitor-analyze-plan-execute
loop architecture commonly used in cloud-based systems, ensuring continuous
optimization of workload distribution to meet service level agreement (SLA)
requirements.
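A simplified sketch of the dynamic weight assignment described above: each VM's normalised utilisation metrics are combined into a load score, lighter-loaded VMs get higher weight, and requests are split proportionally. The metric coefficients are illustrative assumptions, not the paper's tuned values.

```python
def vm_weight(metrics, coeffs=None):
    """Combine normalised utilisation metrics (0..1, higher = busier) into a
    single weight; a lighter-loaded VM gets a higher weight and therefore
    receives more requests."""
    coeffs = coeffs or {"threads": 0.25, "cpu": 0.3, "ram": 0.2,
                        "buffers": 0.1, "bandwidth": 0.15}
    load = sum(coeffs[k] * metrics[k] for k in coeffs)
    return max(0.0, 1.0 - load)

def distribute(requests, vms):
    """Split incoming requests proportionally to each VM's current weight."""
    weights = [vm_weight(m) for m in vms]
    total = sum(weights)
    return [round(requests * w / total) for w in weights]

vms = [{"threads": 0.9, "cpu": 0.8, "ram": 0.7, "buffers": 0.5, "bandwidth": 0.6},
       {"threads": 0.2, "cpu": 0.3, "ram": 0.2, "buffers": 0.1, "bandwidth": 0.2}]
print(distribute(100, vms))  # → [25, 75]: the busy VM gets far fewer requests
```

In the monitor-analyze-plan-execute loop, the weights would be recomputed each monitoring interval so the split tracks workload changes such as flash crowds.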
Result and Observation :

Results and observations from the study include:


1. Improved Response Times: The proposed algorithm demonstrates a significant
improvement in overall average response times compared to baseline algorithms. In
resource failure situations, the novel algorithm enhances response times by 20.7%
and 21.4% compared to the baseline algorithm and round-robin algorithm,
respectively.

2. Enhanced Resource Utilization: By dynamically assigning weights to server virtual
machines based on key metrics like thread count, CPU usage, RAM utilization, and
network bandwidth, the proposed algorithm optimizes resource utilization and
workload distribution. This leads to improved system performance and reduced
communication overhead.

3. Resilience to Workload Variations: The novel algorithm shows resilience to flash
crowds and resource failures, maintaining system efficiency and performance
under fluctuating workloads. This adaptability is crucial for cloud-based
applications that experience dynamic changes in user demand.

4. Realistic Experimental Environment: The study's evaluation in a private cloud
environment provides realistic insights into the behavior of resources and system
performance, offering practical implications for cloud software developers and
organizations looking to optimize load balancing in their applications.
5. Validation of Proposed Algorithm: Through extensive experiments and
performance comparisons, the research validates the effectiveness of the proposed
algorithm in
improving response times, scalability, and system reliability in cloud-based
three-tier web applications.

Conclusion :

Overall, the results and observations highlight the efficacy of the novel
weight-assignment load balancing algorithm in enhancing system performance,
reducing communication overhead, and adapting to changing workload conditions in
cloud environments.

Overall Conclusion

There is an utmost need to optimize present load balancing techniques in order to
provide QoS to clients using cloud services. Classical solutions and algorithms do
not suffice for real-life scenarios; therefore, hybrid algorithms and optimization
techniques are necessary. These techniques must also account for response time,
task migration, resource utilization, makespan, and security.
