
This article has been accepted for publication in IEEE Access. This is the author's version, which has not been fully edited; content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2024.3357122

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2022.Doi Number

An energy-aware task offloading and load balancing for latency-sensitive IoT applications in the Fog-Cloud continuum

Abhijeet Mahapatra1, Santosh K. Majhi2, Senior Member, IEEE, Kaushik Mishra3, Rosy Pradhan4, D. Chandrasekhar Rao5, and Sandeep K. Panda6, Senior Member, IEEE
1 Department of Computer Science & Engineering, Veer Surendra Sai University of Technology, Burla, Odisha 768018 INDIA
2 Department of Computer Science & Information Technology, Guru Ghasidas Viswavidyalaya, Bilaspur, Chhattisgarh 495009 INDIA
3 Department of Computer Science & Engineering, GITAM (Deemed to be University), Visakhapatnam, Andhra Pradesh 530045 INDIA
4 Department of Electrical Engineering, Veer Surendra Sai University of Technology, Burla, Odisha 768018 INDIA
5 Department of Information Technology, Veer Surendra Sai University of Technology, Burla, Odisha 768018 INDIA
6 Department of Data Science and Artificial Intelligence, Faculty of Science & Technology, ICFAI Foundation for Higher Education, Hyderabad, Telangana 413007 INDIA
e-mail: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Corresponding authors: Santosh K. Majhi, Sandeep K. Panda (e-mail: [email protected], [email protected])

ABSTRACT With the voluminous information produced by Internet of Things (IoT) smart gadgets, the number of consumers and their countless service requests are also growing rapidly. Because of the large distance between the IoT devices and the Cloud datacenter, communication between them incurs latency. This latency can be reduced by introducing a Fog layer between the Cloud and IoT layers; it is therefore paramount to offload the tremendous volume of data to Cloud-based systems and Fog-assisted nodes to relieve the overloaded storage and computation. Moreover, these heavy computations consume significant energy at the distributed Fog servers as well as the Cloud datacenters. Therefore, this work addresses the task migration and load balancing problem in a Fog-Cloud system to reduce the latency rate, energy consumption and service time while increasing resource utilization for latency-sensitive systems. This paper uses a Fuzzy logic algorithm to determine the target layers for offloading, considering resource heterogeneity and system requirements (i.e., network bandwidth, task size, resource utilization and latency sensitivity). A Binary Linear-Weight JAYA (BLWJAYA) task scheduling algorithm is proposed to map the incoming IoT requests to computation-rich Fog nodes/virtual machines (VMs). Numerous experimental simulations have been carried out to appraise the efficacy of the suggested method, and the results show that it outperforms other baselines with approximate improvements of 26.2%, 12%, 7%, 8.63% and 6% in resource utilization, service rate, latency rate, energy consumption and load balancing rate, respectively. The presented approach is generic and scalable in addressing the unpredictability of data and the latency associated with the task offloading criteria within the Fog layer.

INDEX TERMS Energy consumption, Fog-Cloud Computing, IoT, Latency sensitivity, Load balancing,
Resource utilization, Scheduling, Task offloading.

I. INTRODUCTION

Over the decades, IoT has gained significant popularity in the telecommunication sector. A tremendous amount of data is produced regularly from the wide deployment of IoT devices through sensors, which then needs processing on the computation nodes within a determined time. The involvement of numerous constraints, such as limited computing capabilities, bandwidth, limited memory capacity, and limited power and networking capabilities of IoT devices, makes them undesirable for processing unpredictable and voluminous data on local devices. With the ever-increasing IoT devices, end-users and their demands are also correspondingly growing prolifically.

To alleviate the aforementioned critical issues, Cloud computing appeared as a promising technology that offers on-demand resource provisioning from a large pool of virtualized resources based on a pay-per-use pricing policy [1].

VOLUME XX, 2017 1

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4

Notwithstanding these advantages, the growing requirements for resources by IoT applications bring down the system's performance and incur high energy consumption due to massive data transmission and task or server migrations [2]. Hence, the network delay caused by processing the bulk data is a major concern for real-time interaction, low latency and high QoS parameters in Cloud computing [3]. Fog computing is envisioned as a prominent solution to address the bottlenecks of Cloud computing, which enables IoT devices by expanding the computational as well as the storage capabilities at the network's edge considering the latency gaps [4, 5]. Nevertheless, maintaining an even dispersal of workloads among nodes, keeping the Fog-Cloud computing resources balanced, and handling the computation of tasks against the conflicting scheduling parameters are the major concerns [6], which compels designing an efficient load balancing and task offloading algorithm to increase resource utilization while reducing energy consumption, communicational and computational latency, and service time.

A. PROBLEM DEFINITION
The problem based on the task scheduling and load balancing approaches, along with the considered QoS and scheduling parameters, is presented in this section. In [26], the authors proposed a heuristic-based makespan-cost scheduling algorithm that primarily maintains the tradeoff between the scheduling cost and makespan for the usage of Cloud resources. However, it is not a viable solution to satisfy the ever-increasing demands of the growing customers that aim to schedule in a large-scale application. In contrast, the proposed approach uses a metaheuristic method which improves the makespan and scheduling cost. Due to the implementation of a metaheuristic-based scheduling algorithm, the utilization of all the Fog nodes is meliorated significantly.

In [27], the authors proposed algorithms for finding an equilibrium between CPU execution time and memory allocation while balancing the loads amidst Fog nodes. In this approach, when the number of tasks increases, the response time also increases. The authors in [27] do not consider dynamic requests. In comparison, the response time of the proposed approach is considerably minimized due to the improvement in makespan, Fog node utilization and load balancing. Dynamic requests having disparate lengths and sizes are considered for execution on heterogeneous resources. In [28], the authors proposed a novel strategy for minimizing latency in Fog-Cloud of Things environments. This strategy prioritizes tasks based on deadlines and QoS constraints, while also considering resource limitations and dependencies between tasks. The implementation of the proposed strategy may introduce additional overhead due to communication between devices, computation for decision-making, and potential data migration. The authors did not quantify these overheads, which could impact the overall performance of the system.

In [29], a novel AI-powered framework called HunterPlus has been proposed to optimize energy consumption in task scheduling across Cloud and Fog resources. HunterPlus leverages reinforcement learning to dynamically allocate tasks, considering energy usage, execution time, and available resources. Reinforcement learning models, like the one employed in HunterPlus, can be difficult to interpret. This makes it challenging to understand the rationale behind their scheduling decisions and troubleshoot potential issues.

B. MOTIVATION
In the context of the IoT-Fog-Cloud continuum, task offloading means migrating tasks from resource-limited IoT gadgets to the computationally powerful servers in Fog or Cloud environments. Moreover, this paradigm compensates for the shortcomings of IoT gadgets in terms of limited resource capabilities, limited memory, limited processing capabilities, etc. Many researchers have addressed these issues with disparate methods. For instance, in [7], the authors suggested a Cloud, Fog, Edge, and IoT-driven collaborative real-time architecture called Mobi-IoST that uses hierarchical information and mobility information for improving QoS, delay and power consumption. The researchers of [8] introduced an Energy-Efficient Task Offloading (EETO) method coupled with a layered Fog network employing the Lyapunov optimization method to handle the energy-performance tradeoff by jointly scheduling and offloading real-time IoT applications. For implementing a software communicating with the virtualization-based Edge Federated Learning (EFL) surrounding, the authors of [9] presented a decentralized software-defined network (SDN) controller as an agent. The investigation of task offloading issues with the optimization targets of latency, multi-node load balancing, and energy utilization inside an "End-Edge-Cloud" collaborative framework for a 6G network led to the development of an enhanced AR-MOEA in [10].

This contribution primarily concentrates on developing an energy-efficient and computationally effective offloading strategy that reduces energy utilization and latency while increasing the utilization of resources in the Fog-Cloud paradigm. Primarily, this study aims to address the demands of latency-sensitive IoT devices. In recapitulation, the contributions are delineated as follows:
• Devised an IoT-Fog-Cloud architecture that implements a method to reduce the overall time of latency-sensitive tools by taking into consideration the computational and resource specifications,
• Implemented a method using Fuzzy logic that determines the task offloading target layers,
• Proposed Fog node clustering using a Self-adaptive Fuzzy C-means++ clustering technique for scheduling the high-end and low-end tasks concurrently in different clusters of Fog nodes,
• Proposed a task scheduling algorithm using the BLWJAYA algorithm for scheduling and offloading the IoT requests,
• Carried out numerous simulations using iFogSim over CloudSim by considering task and machine heterogeneity.

The following sections have been organized as such, where Section II introduces the detailed description of each layer for


the devised system framework. Section III proposes the computational models with corresponding working principles that support scheduling with offloading and resource management, followed by the proposed objective function. Section IV implements the experimental results through extensive simulations, followed by a comparative analysis of the obtained results with the existing approaches. Subsequently, the concluding remarks as well as the future insights are highlighted in Section V.

II. PROPOSED SYSTEM ARCHITECTURE
This section presents a 3-layered IoT-Fog-Cloud architecture for load balancing and task scheduling in Fog computing. As presented in Figure 1, the IoT-Fog-Cloud continuum contains three layers from top to bottom: the Cloud layer (storage and on-demand service provider), the multiple Fog nodes (middle-layer scheduler), and at the bottom, a variety of widespread IoT devices. In this approach, we have considered a set of independent IoT devices D = {D_i, i = 1, 2, 3, ..., d} and a set of high-performing Fog nodes F = {F_i, i = 1, 2, 3, ..., f} for processing the voluminous data generated by the wide deployment of IoT devices. Each layer with its corresponding working nature is elucidated as follows:

Layer 1 (IoT layer): This layer encompasses multiple internet-enabled smart devices which are geographically distributed. There is a set of n user requests denoted by R = {R_i, i = 1, 2, 3, ..., n} which are generated through these IoT devices and are forwarded to the routers placed on the edge of the network for processing by target nodes.

Layer 2 (Fog layer): The proposed Fog layer has been segregated into two parts, i.e., Fog Layer 1, which contains all the intermediate networking nodes like routers, switches, etc., deployed at the network's edge, and Fog Layer 2, which refers to the deployment of all the Fog nodes that process the requests. All the Fog nodes are distributed geographically in the Fog ecosystem, which is controlled as well as managed by the same Cloud service provider [11]. All the Fog nodes are segregated into fixed-size clusters at this level. Each cluster possesses a cluster head, whose selection, together with the process of partitioning Fog nodes into clusters, is decided by the Fuzzy C-means++ clustering approach. Each cluster head is in sync with a controller called the Fog Controller (FC), a centralized and independent entity in the Fog layer whose responsibility is managing, deploying, and planning application services. The Fog broker maintains a record called the Fog Information Service (FIS). It is a repository, analogous to the CIS (Cloud Information Service) in a Cloud environment, which keeps all the information, such as IoT requests, infrastructure (Fog nodes, VMs, sensors, actuators, routers, etc.) and decisions about offloading. A task classifier and a fuzzy logic unit make up Fog Layer 1, which determines the desired layers for offloading. To reduce the delay, each end-user request (task) has been classified into high, medium, and low priorities depending on some parameters (length of the task, start time, processing time and user-specified delay). The layer for offloading the IoT requests is decided by a middle-level method called Fuzzy logic, implemented before the Primary Resource Controller (PRC), for prioritizing the tasks based on Fuzzy rules.

Layer 3 (Cloud layer): This layer contains a Cloud datacenter, which is defined as a physical place encompassing several physical hosts denoted by H = {H_1, H_2, H_3, ..., H_h}, each of which further consists of a set of heterogeneous VMs denoted by VM = {VM_1, VM_2, VM_3, ..., VM_m}, where VM_j: 1 ≤ j ≤ m. Each VM is characterized by some processing speed expressed in MIPS. Consider that there are numerous dynamically independent IoT requests denoted by T = {T_1, T_2, T_3, ..., T_n}, where T_i: 1 ≤ i ≤ n, expressed in MI (million instructions), which are to be executed by the created VMs under some hosts [12, 13].

FIGURE 1. The proposed 3-layered IoT-Fog-Cloud Architecture.

III. COMPUTATIONAL MODEL AND PROBLEM FORMULATION

A. SCHEDULING PARAMETERS
Consider a problem with n tasks and m Fog nodes/VMs of d dimensions. Each particle in the system is represented as X_ij in an n × m 2-dimensional matrix X = {X_11, X_22, X_33, ..., X_nm}, where X_ij = {i ∈ (1, 2, 3, ..., n); j ∈ (1, 2, 3, ..., m)}. This representation implicates the task allocation onto corresponding Fog nodes in which IoT requests are to be processed. The following equation represents the allocation table subset to the assignment of a task (request) onto Fog nodes/VMs. This mapping is illustrated with six tasks and three Fog nodes.
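As an illustration of such an allocation table, the minimal Python sketch below builds the binary matrix X for six tasks and three Fog nodes. The task lengths, node speeds, and the greedy earliest-finish assignment rule are all hypothetical stand-ins for illustration only; they are not the BLWJAYA scheduler itself.

```python
# Sketch of the binary allocation table: X[i][j] = 1 iff task T_i is
# mapped to Fog node/VM j. The greedy earliest-finish rule below is a
# placeholder for the actual scheduling algorithm.
task_len_mi = [400, 150, 900, 300, 650, 220]   # hypothetical task lengths (MI)
node_mips   = [500, 1000, 750]                 # hypothetical node speeds (MIPS)

n, m = len(task_len_mi), len(node_mips)
X = [[0] * m for _ in range(n)]                # n x m allocation matrix
busy = [0.0] * m                               # accumulated busy time per node (s)

for i, length in enumerate(task_len_mi):
    # pick the node that would finish this task earliest
    j = min(range(m), key=lambda j: busy[j] + length / node_mips[j])
    X[i][j] = 1
    busy[j] += length / node_mips[j]

for row in X:
    print(row)
print("makespan (s):", round(max(busy), 3))
```

Each row of X contains exactly one 1, i.e., every request is processed by exactly one Fog node/VM, matching the mapping described above.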


X_{ij} = \begin{cases} 1, & \text{if } VM_j \text{ holds a task } T_i \\ 0, & \text{otherwise} \end{cases}  (1)

1) COMPLETION TIME
The time required to complete the execution of a task T_i on the j-th Fog node is denoted by CT_{ij}. It is expressed in milliseconds (ms) and is given in Eq. 2. The completion time varies for different tasks due to the involvement of variable lengths and disparate configurations of Fog nodes.

CT_{ij} = \frac{L_i}{PT_{f_n}^{j}}  (2)

where L_i indicates the length of the request (task) T_i in MI, PT_{f_n}^{j} represents the processing time of the j-th Fog node f_n^j, and PT_{f_n} is the overall processing time; these are estimated in Eq. 3 and Eq. 4.

PT_{f_n}^{j} = \frac{L(f_n^j) \times m}{L(f_n)}  (3)

PT_{f_n} = \sum_{j=1}^{m} PT_{f_n}^{j}  (4)

where L(f_n^j) indicates the load of the j-th Fog node f_n^j, L(f_n) is the total load, and m is the number of Fog nodes.

2) DEGREE OF UTILIZATION
The key objective is to reduce the makespan to increase the utilization of VMs in the Cloud as well as in the Fog paradigm. Hence, these two factors are dependent on each other. The makespan can be described in terms of the job completion times on a j-th Fog node, while the second contributing component, i.e., utilization, is the proportion of the overall makespan taken by the assigned tasks. Hence, the makespan (Makespan), the utilization of a Fog node (f_{n_{util}}^{j}), and the average utilization (avg\,f_{n_{util}}) are expressed through the following mathematical equations.

Makespan = \max\{CT_{ij} \mid i \in (1, 2, \dots, n); j \in (1, 2, \dots, m)\}  (5)

f_{n_{util}}^{j} = \frac{\sum_{i=1}^{n} CT_i}{Makespan}  (6)

avg\,f_{n_{util}} = \frac{\sum_{j=1}^{m} f_{n_{util}}^{j}}{m}  (7)

3) SERVICE RATE
The primary QoS parameter is the service rate, as the end-to-end service rate is most notable for latency-sensitive applications when offloading the tasks. The target layer on which a task must be processed depends on the service rate of each task. In our approach, the tasks are forwarded to different target layers for processing depending on their priority; hence, the service rate differs, and so we consider the average service rate. It depends on two factors while processing on each layer: network time and processing time. It is represented as follows, where n indicates the total number of tasks.

SR = \frac{\sum Processing\ time + \sum Network\ time}{n}  (8)

B. FUZZY LOGIC MODEL FOR TASK CLASSIFICATION
Meeting the deadline and reducing the delay are the two prime factors in meliorating the throughput. Moreover, high-priority requests are required to meet the deadline or latency constraint. Consequently, all such tasks should be processed by computationally efficient Fog nodes. On the contrary, local nodes can handle low-priority tasks because it is permissible for them to miss their deadlines or create delays. In the proposed system, tasks are classified into high, medium, and low priorities depending on their requirements. Therefore, a Fuzzy logic system is used to classify the tasks and hence determine the target layers for offloading. Each task (user request) is placed on the respective queue (Q_1, Q_2, or Q_3) based on the task's priority (high, medium, or low). Here, Fuzzy logic is used to divide the tasks and place them in the queues. These queues are connected to the FC. Then, the FC decides where to offload the tasks (local Fog nodes, Cloud, or collaborative Fog nodes) based on the Fuzzy inference output. This approach expedites the computation of disparate tasks, thereby preventing starvation and aging issues in scheduling.

1) FUZZY LOGIC MODEL
This model is the entry point for all the IoT requests generated from the different applications; it is used to classify the tasks according to their lengths and place the tasks into corresponding queues. It is required to identify the suitable Fog node for offloading the tasks. It consists of three modules: (a) Fuzzy inputs, (b) Fuzzification, and (c) Defuzzification.

Fuzzy inputs: Initially, the required input attributes are to be specified for the fuzzification in the Fuzzy logic. The necessary inputs are the task size, which represents the length of the tasks and depends on other input parameters like network bandwidth, latency and VM utilization; the network bandwidth, which is required to decide the amount of bandwidth needed to offload a task either to the Fog layer or to the Cloud; the latency sensitivity, which is associated with the tasks and defines the degree of computational or communicational latency for latency-sensitive applications requiring a hard time bound; and the degree of Fog node/VM utilization, which helps the broker to identify the degree of utilization of all VMs deployed within Fog Layer 2 as Fog nodes or in the Cloud as VMs. All these parameters are represented in the form of high, med, and low as lexical variables. These classifications represent the dynamic and heterogeneous characteristics of the IoT requests over the Fog-Cloud architecture.

Fuzzification: In this step, the Fuzzifier accepts all the necessary parameters as numerical inputs from the infrastructure, which is managed and controlled by the broker. These sets of input values are then processed by an inference engine by consulting the Fuzzy Knowledge Base (FKB). The inputs are afterwards computed by the defined Fuzzy membership functions. Based on the knowledge base system, some inferences result in the form of fuzzy rules. The fuzzy rules are then integrated and generated as outputs by the defuzzification process. The membership functions and fuzzy inferences are depicted in Figure 2. Table I delineates the required outputs from the inference rules.

De-fuzzification: This is a process to transform the fuzzy inference outputs to an appropriate value according to the membership functions. This paper adopts a mean-of-maximum method to generate outputs in the Fuzzy logic system because the outputs have almost maximum values due to the involvement of task heterogeneity and dynamic bandwidth


requirements. The process of the proposed Fuzzy logic system is depicted in Figure 2.

FIGURE 2. Procedure of offloading decision using the Fuzzy Logic Architecture.

TABLE I
FUZZY OUTPUT ON INFERENCES

Size of Tasks (MI) | Bandwidth of Network (Mbps) | VM Utilization | Latency Sensitivity | Output Target Layer
+1 | +1 | +1 | +1 | Cloud
+1 | +1 | +1 | 0 | Cloud
+1 | +1 | 0 | -1 | Cloud
+1 | +1 | 0 | +1 | Cloud
+1 | 0 | -1 | 0 | CFN
+1 | 0 | -1 | -1 | CFN
+1 | 0 | +1 | +1 | Cloud
+1 | 0 | +1 | 0 | Cloud
0 | -1 | 0 | -1 | CFN
0 | -1 | 0 | +1 | CFN
0 | -1 | -1 | 0 | CFN
0 | -1 | -1 | -1 | CFN
0 | +1 | +1 | +1 | CFN
0 | +1 | +1 | 0 | CFN
0 | +1 | 0 | -1 | CFN
0 | +1 | 0 | +1 | CFN
-1 | 0 | -1 | 0 | LFN
-1 | 0 | -1 | -1 | LFN
-1 | 0 | +1 | +1 | LFN
-1 | 0 | +1 | 0 | LFN
-1 | -1 | 0 | -1 | LFN
-1 | -1 | 0 | +1 | LFN
-1 | -1 | -1 | 0 | LFN
-1 | -1 | -1 | -1 | LFN

where +1 denotes high, 0 denotes medium, -1 denotes low, LFN denotes local Fog nodes and CFN denotes collaborative Fog nodes.

C. CLUSTERING MODEL
Clustering is an unsupervised learning technique that partitions the Fog nodes into several clusters. Clusters are formed to distinguish between local and collaborative Fog nodes by placing local Fog nodes in one cluster and a mix of collaborative Fog nodes in several clusters. Besides, a mixture of homogeneous and heterogeneous Fog nodes is placed in each cluster to cope with the growing demands. This facilitates reducing the response time as well as the possibility of overloaded conditions.

The gap between the Fog nodes holds significance because it is a deciding factor in determining which cluster head will appear in the next round of clustering. Hence, the distance between the Fog nodes is estimated using the Euclidean formula. So, the distance between two Fog nodes of two nearby clusters Fc(VM_j^i), j = 1, ..., m, i = 1, ..., M and Fc(VM_k^i), k = 1, ..., m, i = 1, ..., M is estimated in Eq. 9.

\|Fc(VM_j^i) - Fc(VM_k^i)\| = \sqrt{\sum_{i=1}^{M} \left(Fc(VM_j^i) - Fc(VM_k^i)\right)^2} = \sqrt{\left(Fc(VM_j^1) - Fc(VM_k^1)\right)^2 + \dots + \left(Fc(VM_j^M) - Fc(VM_k^M)\right)^2}  (9)

The FCM++ is an improved version of the FCM clustering algorithm. It has been used to divide the Fog nodes into clusters. Here, a self-adaptive FCM++ clustering technique is proposed to avoid setting the initial cluster heads arbitrarily. Therefore, it relieves users of the burden of setting the initial cluster heads randomly. Hence, the quality of the clustering can be improved. (10 × M) is considered to be the starting cluster head, which is uniform in nature, where M is the set of design variables. In our case, the number of Fog nodes is represented by M. Then, using Eq. 10, a fresh cluster head can be formulated.

R_{new}^{i} = round\left(R_{old}^{i} + rand \times R_{old}^{i}\right)  (10)

where rand(-0.5, 0.5) is an arbitrary number.

D. LATENCY MODEL
With each scheduling, an associated latency is involved. This factor also differs with the location of processing; there are different latency rates for scheduling on different target layers. For example, the amount of time required to process a request (task) at a local Fog node would not be the same as for those tasks executing on either collaborative Fog nodes or Cloud VMs. Generally, it is the summation of the execution time, the transmission time from the IoT devices to the different layers, and the acknowledgements back to the IoT devices, depending on the type of task. It is expressed in milliseconds (ms). Therefore, it can be mathematically formulated below as the time duration for the execution of a task on any given layer and the corresponding responses.

The latency of a request i transmitted by an IoT device is characterized by the transmission time, processing time, and response time of a Fog node/VM for the request. Therefore, it is the summation of two levels of latency depending on the location of offloading: (a) the transmission latency to send a request from an IoT device to the Fog (T_t^{IoT-Fog}) (at the Fog layer), and (b) the transmission latency to send a request from an IoT device to the Cloud (T_t^{IoT-Cloud}) (at the Cloud layer).

1) TRANSMISSION LATENCY FROM IOT TO FOG
It is the sum of (i) the processing time (latency) of the request i on a Fog node f_n, expressed as T_p^{Fog}, (ii) the transmission latency of sending an acknowledgement to the IoT device, expressed as T_t^{Fog-Ack}, and (iii) the transmission latency when a Fog node is incapable of scheduling and offloads to a Cloud VM for processing, expressed as T_t^{Fog-Cloud}. These three steps are mathematically formulated as follows:


[email protected]@GB 7A@GB
First, the transmission latency to send a request 𝑖 to one of 𝑇,7A@GB = 𝑇,(!,M8) + 𝑇-(!,M8) + 𝑇,7A@GBDC@.
5.4
(19)
the Fog nodes 𝑓( from IoT to Fog is calculated in Eq. 11. Therefore, the total latency caused by the IoT-Fog-Cloud
𝑇,[email protected]@F
(!," )
= 𝐿! × 𝑇,(? (11) environment for a request 𝑖 is the summation of the total
#
Note: 𝐿! is the length of the 𝑖+% request, and 𝑇,(? implies the latency caused by a Fog node and a VM at the Fog layer and
transmission latency caused by transmitting each byte of Cloud layer, respectively. It is expressed as in Eq. 20.
request over the network from an IoT device to a Fog node. 𝑇,[email protected]@FD7A@GB = 𝑇,E@F + 𝑇,7A@GB (20)
Second, the processing latency of the task (request) 𝑖 on a
Fog node 𝑓( is expressed using Eq. 12. E. ENERGY CONSUMPTION MODEL
E@F H.0/# $1 The total amount of energy used when an IoT device transmits
𝑇- = 𝐿! × (12) a request to the Cloud or Fog layer via a router at the network's
I.0/# $ 1
edge, as well as the amount of time it takes for the Cloud
Note: 𝑆𝑇R𝑓( ' T denotes the service time (ms) taken by the 𝑗+%
VM or Fog node to fulfil the request. Due to the nature of
Fog node 𝑓( , and 𝑁𝑇R𝑓( ' T denotes the total number of tasks offloading, the degree of energy consumption does vary. In
holding by a 𝑗+% Fog node 𝑓( . this regard, we divided the total energy consumption into three
Third, the transmission latency of sending an levels: (a) consumption of energy during the propagation of a
acknowledgement to the respective IoT device is denoted in task from an IoT device (IoT layer) to a Fog node (Fog layer).
Eq. 13. It also includes the energy consumption of intermediate nodes
𝑇,E@FDC@.
5.4
= 𝐿! × 𝑇,(? (13) (routers or switches) at the edge of the network during
Fourth, the transmission latency incurred due to the propagation. Besides, it involves the energy consumption
offloading of request 𝑖 to the Cloud VM upon a failure of a when a Fog node sends an acknowledgement to the same IoT
Fog node/server is expressed in Eq. 14. smart device; (b) the amount of energy utilized for executing
H.0/# $ 1 and storing the IoT requests at the Fog layer by the Fog nodes
𝑇,E@FD7A@GB = 𝐿! × 𝑇,(? + 𝐿! × + 𝐿J! × 𝑇,(? (14) and at the Cloud by the VMs; (c) the energy consumed to send
I.0/# $ 1
the high-end requests to the Cloud. It includes the energy
Note: 𝐿J!
× 𝑇,(?represents the acknowledgement from the
consumed for sending the acknowledgement from a VM
Cloud to the Fog node for the request 𝑖. (Cloud) to the IoT device (IoT layer). In addition, it involves
Therefore, the total latency experienced by a Fog node 𝑓( the energy consumption of forwarding a request and receiving
for a request 𝑖 by combining the Eq. 11, 12, 13, and 14, we get: an acknowledgement to/from the Cloud/Fog layer upon an
𝑇,E@F = 𝑇,[email protected]@F
(!," )
+ 𝑇-E@F + 𝑇,E@FDC@.
5.4
+ 𝜑𝑇,E@FD7A@GB (15) unforeseen scenario (failure of a Fog node/cluster/server).
#
.K
0/#& Therefore, the energy consumption at each layer is
Note: 𝜑 j= .K k is the proportion of the overall task
mathematically expressed as follows:
(request) sent from the Fog to the Cloud for processing.
1) ENERGY CONSUMPTION AT THE IOT LAYER
2) TRANSMISSION LATENCY FROM IOT TO CLOUD In our approach, we assume that the consumption of energy
For some high-end requests, IoT or edge of the network takes place during propagating the requests from the wide
forwards the request 𝑖 to the Cloud VM directly. So, the deployment of IoT devices to the Fog Controller (FC) of the
transmission latency constitutes of three latencies: (i) Fog layer via intermediate nodes. So, the energy is consumed
transmission latency of sending request 𝑖 from an IoT device at two stages: (i) at the IoT-router level, and (ii) at the router-
[email protected]@GB
to a Cloud VM R𝑇,(!,M8) T, (ii) the execution latency of Fog level. Moreover, the high-end tasks are also sent to the
+% 7A@GB Cloud layer for execution which is explained at the Cloud
request 𝑖 at the 𝑗 VM of the Cloud R𝑇-(!,M8) T, and (iii) the
transmission latency to send an acknowledgement to the IoT layer. The two cases are represented mathematically as
device for request 𝑖 R𝑇,7A@GBDC@. T. Hence, each scenario is follows:
5.4
At the IoT-router level (IoT-r): The energy consumption by a
mathematically represented as follows:
First, sending a request 𝑖 to the Cloud VM, the transmission 𝑗+% router, when a request 𝑖 is offloaded by an IoT device, is
expressed as in Eq. 21.
latency is calculated in Eq.16.
[email protected]@GB (!,P -!9'/ -*5:
𝑇,(!,M8) = 𝐿! × 𝑇,(? (16) )
𝐸𝑐[email protected] (?
= 𝐿! × 𝐸𝑐[email protected] + 𝐿! × 𝑔Q × m ,$
+ ,$
n(21)
7 !9'/ $
P%&!' 7 *5:
Second, the processing latency at the 𝑗+% VM is estimated ,$
(?
$,

using Eq. 17. Note: 𝐿! is the width of the 𝑖+% task (request), 𝐸𝑐[email protected]
7A@GB H.NM8$ O represents the energy used to communicate every byte of task
𝑇-(!,M8) = 𝐿! × (17) (request) from a smart IoT device to the 𝑗+% router over a
I.NM8$ O
Third, the transmission latency for sending an network, 𝑔Q is the gain of the channel, 𝑃P!BA; $ indicates the
acknowledgement for the 𝑖+% request is evaluated as in Eq. 18. &9R
power consumption during idle state, and 𝑃P $ represents the
𝑇,7A@GBDC@.
5.4
= 𝐿! × 𝑇,(? (18) power consumption at peak of the 𝑗+% router, 𝐶P!BA; indicates
$
Therefore, the total transmission latency caused by a VM &9R
the idle capacity and 𝐶P $ denote the maximum capacity of
for a request 𝑖 at the Cloud is estimated by combing the above
Eq. 16, 17, and 18 as: the 𝑗+% router, while the degree of utilization of the 𝑗+% router
'
is denoted by 𝑟G+!A .

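As a rough illustration, the per-request latency terms above can be combined in a few lines of code. This is a hedged sketch under assumed units (request and acknowledgement sizes in bytes, a per-byte transmission time, and an execution-time factor \(S/C\)); the function names and sample values are illustrative, not the paper's simulator code.

```python
# Sketch of the latency terms in Eqs. 13-15 under assumed symbol meanings:
# L_i / L_ack_i are request/acknowledgement sizes (bytes), t_pr is the
# per-byte transmission time, and s over c is the execution-time factor
# of the target node. All names here are illustrative assumptions.

def fog_ack_latency(L_i: float, t_pr: float) -> float:
    """Eq. 13: latency of the Fog-to-IoT acknowledgement."""
    return L_i * t_pr

def fog_to_cloud_latency(L_i: float, L_ack_i: float, t_pr: float,
                         s: float, c: float) -> float:
    """Eq. 14: forward transfer + execution + Cloud-to-Fog acknowledgement."""
    return L_i * t_pr + L_i * (s / c) + L_ack_i * t_pr

def total_fog_latency(t_iot_fog: float, t_exec_fog: float,
                      t_fog_ack: float, t_fog_cloud: float,
                      phi: float) -> float:
    """Eq. 15: total latency seen by a Fog node; phi is the fraction of
    traffic forwarded on to the Cloud."""
    return t_iot_fog + t_exec_fog + t_fog_ack + phi * t_fog_cloud

# Example: 1000-byte request, 100-byte ack, 1 us/byte link
print(fog_ack_latency(1000, 1e-6))
print(fog_to_cloud_latency(1000, 100, 1e-6, 1e-3, 1.0))
```

With \(\varphi = 0\) the Cloud term drops out entirely, matching the case where no requests are forwarded from the Fog layer.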
VOLUME XX, 2017 7

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4
This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2024.3357122

At the router-Fog level (r-Fog): The energy consumption by a \(j^{th}\) Fog node, when a request \(i\) is offloaded by a router to a Fog node, is expressed using Eq. 22.

\(Ec_{r\text{-}Fog}^{(i,f_n^j)} = L_i \times Ec_{r\text{-}Fog}^{pr} + L_i \times g_N \times \left( \frac{P_{f_n^j}^{idle}}{C_{f_n^j}^{idle}} + \frac{P_{f_n^j}^{max}}{f_n^{j\,util} \times C_{f_n^j}^{max}} \right)\)  (22)

Note: \(Ec_{r\text{-}Fog}^{pr}\) represents the energy required for the transmission of a request of size one byte from a router to the \(j^{th}\) Fog node, \(P_{f_n^j}^{idle}\) and \(P_{f_n^j}^{max}\) are the idle and maximum power utilized in the \(j^{th}\) Fog node, \(C_{f_n^j}^{idle}\) and \(C_{f_n^j}^{max}\) denote the idle and maximum capacity of the \(j^{th}\) Fog node, and \(f_n^{j\,util}\) is the degree of utilization of the \(j^{th}\) Fog node.

2) ENERGY CONSUMPTION AT THE FOG LAYER
When a request arrives at the Fog layer (FC), it is sent to one of the cluster heads for processing by one of the computation-rich Fog nodes, depending on the task's specifications. Then, the corresponding task is processed and stored by the underlying Fog node. Thus, while processing and storing, some energy is consumed; this factor is mathematically denoted as \(Ec_{Fog}^{e\&s_i}\) and written using Eq. 23.

\(Ec_{Fog}^{e\&s_i} = L_i \times \left( Ec_{f_n^j}^{e_i} + Ec_{f_n^j}^{s_i} \right)\)  (23)

Note: \(Ec_{f_n^j}^{e_i}\) and \(Ec_{f_n^j}^{s_i}\) imply the energy consumption for processing and storing each request \(i\) in a \(j^{th}\) Fog node \(f_n\). The term \(Ec_{f_n^j}^{e_i}\) [14] in Eq. 23 is estimated as in Eq. 24.

\(Ec_{f_n^j}^{e_i} = \left( r_u \times P_{f_n^j}^{active} + (1 - r_u) \times P_{f_n^j}^{idle} \right) \times \frac{S(f_n^j)}{C(f_n^j)}\)  (24)

Note: \(P_{f_n^j}^{active}\) and \(P_{f_n^j}^{idle}\) denote the power consumption of the active and idle \(j^{th}\) Fog node \(f_n\), \(r_u\) is the ratio \(\left(= \frac{t_{active}}{t_{total}}\right)\) of the active time to the total time for the \(j^{th}\) Fog node \(f_n\), and \((1 - r_u)\) denotes the elapsed time for an idle Fog node.

Upon failure of Fog nodes, the requests are offloaded to the Cloud. Hence, the transmission of requests from the Fog layer to the Cloud consumes a great deal of energy. The energy consumption caused by offloading the task (request) \(i\) from a Fog node \(f_n\) to a Cloud VM is denoted as \(Ec_{Fog\text{-}Cloud}^{(f_n,i,VM)}\), which is expressed using Eq. 25.

\(Ec_{Fog\text{-}Cloud}^{(f_n,i,VM)} = L_i \times \left( Ec_{Fog\text{-}Cloud}^{pr} + Ec_{Cloud\text{-}Fog}^{(ack,i)} \right)\)  (25)

Note: \(Ec_{Cloud\text{-}Fog}^{(ack,i)}\) is the energy required to transmit the acknowledgement back to the Fog node, and \(Ec_{Fog\text{-}Cloud}^{pr}\) represents the energy utilized to send each byte of a request from a Fog node to a Cloud VM.

Therefore, by combining Eq. 22, 23, 24, and 25, we get the total energy consumption by the Fog layer [15], represented using Eq. 26.

\(Ec_{Fog}^{(f_n,i)} = Ec_{IoT\text{-}r}^{(i,r)} + Ec_{r\text{-}Fog}^{(i,f_n^j)} + Ec_{Fog}^{e\&s_i} + \varphi\,Ec_{Fog\text{-}Cloud}^{(f_n,i,VM)} + Ec_{Fog\text{-}Ack}^{(ack,i)}\)  (26)

Note: \(Ec_{Fog\text{-}Ack}^{(ack,i)}\) is the energy consumed to send the acknowledgement from the concerned Fog node to the smart IoT device, and \(\varphi \left(= \frac{\sum L_i^{FC}}{\sum L_i}\right)\) is the proportion of the overall request forwarded from the Fog to the Cloud for processing.

3) ENERGY CONSUMPTION AT THE CLOUD LAYER
There are three scenarios regarding the consumption of energy for a VM at the Cloud layer. Energy is consumed during (i) sending a request to the Cloud layer, (ii) processing and storing at the Cloud VM, and (iii) sending the acknowledgement from the Cloud layer to the corresponding IoT device. These three scenarios are mathematically represented as follows:

First, the energy consumed during the transmission of each byte of a request (task) from a smart IoT device to a Cloud VM is estimated in Eq. 27.

\(Ec_{IoT\text{-}Cloud}^{(i,VM)} = L_i \times Ec_{IoT\text{-}Cloud}^{pr} + L_i \times g_N \times \left( \frac{P_{VM_j}^{idle}}{C_{VM_j}^{idle}} + \frac{P_{VM_j}^{max}}{VM_j^{util} \times C_{VM_j}^{max}} \right)\)  (27)

Second, the energy consumed by a \(j^{th}\) VM due to processing and storing the \(i^{th}\) request is calculated as in Eq. 28.

\(Ec_{VM_j}^{e\&s_i} = L_i \times \left( Ec_{VM_j}^{e_i} + Ec_{VM_j}^{s_i} \right)\)  (28)

where the energy consumption due to processing on the \(j^{th}\) VM is estimated using Eq. 29.

\(Ec_{VM_j}^{e_i} = \left( r_u \times P_{VM_j}^{active} + (1 - r_u) \times P_{VM_j}^{idle} \right) \times \frac{S(VM_j)}{C(VM_j)}\)  (29)

Note: \(P_{VM_j}^{active}\) and \(P_{VM_j}^{idle}\) denote the power consumption of the active and idle \(j^{th}\) VM, \(r_u\) is the ratio \(\left(= \frac{t_{active}}{t_{total}}\right)\) of the active time to the total time for the \(j^{th}\) VM, and \((1 - r_u)\) denotes the elapsed time for an idle VM.

Third, the energy consumption resulting from a VM sending an acknowledgement to the requesting IoT device is expressed as \(Ec_{Cloud\text{-}Ack}^{(ack,i)}\).

Hence, by combining Eq. 27 and Eq. 28, we get the total energy consumption by a VM at the Cloud layer, expressed in Eq. 30.

\(Ec_{Cloud}^{(VM,i)} = Ec_{IoT\text{-}Cloud}^{(i,VM)} + Ec_{VM_j}^{e\&s_i} + Ec_{Cloud\text{-}Ack}^{(ack,i)}\)  (30)

where \(Ec_{Cloud\text{-}Ack}^{(ack,i)}\) is the energy consumption caused by sending an acknowledgement from a Cloud VM to an IoT device.

So, from Eq. 26 and Eq. 30, we get the total energy utilized by the Fog layer and the Cloud, represented using Eq. 31.

\(Ec_{IoT\text{-}Fog\text{-}Cloud}^{(i,f_n,VM)} = Ec_{Fog}^{(f_n,i)} + Ec_{Cloud}^{(VM,i)}\)  (31)

F. TASK SCHEDULING MODEL
The IoT requests generated by latency-sensitive applications require computationally efficient resources that can execute tasks within a time bound, resulting in minimum delay and execution time. Therefore, an efficient yet improved metaheuristic-based task scheduling algorithm is proposed, called the binary linear-weight JAYA (BLWJAYA) algorithm.
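The utilisation-weighted processing-energy term and the per-layer total can be sketched as below. This is an illustrative reading of Eqs. 24/29 and Eq. 30 under assumed units (power draws, an activity ratio, and an execution-time factor \(S/C\)); the function names are assumptions, not the paper's implementation.

```python
# Hedged sketch of Eqs. 24/29 (processing energy weighted by the
# active-time ratio r_u) and Eq. 30 (total per-VM energy at the Cloud).
# p_active / p_idle are the node's active and idle power draws; s over c
# is the execution-time factor. Names are illustrative assumptions.

def processing_energy(r_u: float, p_active: float, p_idle: float,
                      s: float, c: float) -> float:
    """Eqs. 24/29: (r_u * P_active + (1 - r_u) * P_idle) * S / C."""
    return (r_u * p_active + (1.0 - r_u) * p_idle) * (s / c)

def cloud_energy(e_transmit: float, e_proc_store: float,
                 e_ack: float) -> float:
    """Eq. 30: transmission + processing/storage + acknowledgement."""
    return e_transmit + e_proc_store + e_ack

# A node active half the time, 10 W active / 2 W idle, S/C = 1
print(processing_energy(0.5, 10.0, 2.0, 1.0, 1.0))
```

Note how a fully idle node (\(r_u = 0\)) still contributes \(P^{idle} \times S/C\), which is exactly why the model charges idle power against the elapsed time.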


The proposed binary-based LWJAYA algorithm is an improvement of the standard JAYA algorithm. It is used to speed up convergence by trading off between the exploration and exploitation parts during the execution phase [16]. According to this variant, the standard equation [17] has been altered by inserting a weight constraint into the existing equation. As a result, the number of iterations and the computational burden are reduced by improving the searching ability in terms of obtaining optimal results. The primary goal of inserting such a weight parameter into the equation is to improve the exploration by reducing the disparity between optimal and sub-optimal solutions. By dynamically altering the searching capability, the dynamic weight in the updated equation (Eq. 32) enhances the local and global optima. Hence, the position-update equation of the standard JAYA can be improved as in Eq. 32.

\(X_i^{new} = X_i^{old} + \omega \left[ c_1\left(BEST_i - \left|X_i^{old}\right|\right) - c_2\left(WORST_i - \left|X_i^{old}\right|\right) \right]\)  (32)

with \(\omega = \omega_h - \frac{(\omega_h - \omega_l) \times i}{t_{max}}\)

Note: \(t_{max}\) is the maximum number of iterations, \(i\) is the current iteration, \(\omega\) is the weight value, which linearly reduces from a high to a low value over the iterations, and the highest and lowest values of \(\omega\) are represented by \(\omega_h\) and \(\omega_l\), respectively.

The set of iterations represents the number of times \(X_i^{old}\) is updated to evaluate \(X_i^{new}\) for obtaining the optimal results for the fitness function. If the fitness value for \(X_i^{new}\) is better than that of \(X_i^{old}\), then \(X_i^{new}\) is accepted; otherwise, the value of \(X_i^{old}\) is retained as the new best fitness value. This process iterates until the termination criterion is met. Eq. 32 states that the parameter is multiplied by the difference between the optimal and worst solutions as well as by the value of the current location. In this work, \(\omega\) behaves as a regulating factor in the problem space, which regulates the speed of the particles and maintains the trade-off in the search space. A linearly decreasing weight factor is considered because a high value of \(\omega\) manifests a global search at the beginning of the execution, while a low value of \(\omega\) facilitates a local search at the end of the execution. Therefore, with the gradually decreasing value of \(\omega\), there is a greater chance of avoiding the worst solutions and moving towards the best ones [18]. The values of \(\omega_h\) and \(\omega_l\) are set to 1.6 and 0.8, respectively.

The mapping of tasks to Fog nodes/VMs represents discrete values. Hence, a binary version of the LWJAYA algorithm has been implemented in this work. This is accomplished by utilizing a tangent hyperbolic logistic transfer function [19] that transforms the obtained continuous results into discrete values using Eq. 33.

\(\tanh\left(\left|X_i^{new}\right|\right) = \frac{e^{\left|X_i^{new}\right|} - 1}{e^{\left|X_i^{new}\right|} + 1}\)  (33)

The modified value of \(X_i^{new}\) is expressed in discrete form through Eq. 34.

\(X_i^{new} = \begin{cases} 1, & \text{if } rand() < \tanh\left(\left|X_i^{new}\right|\right) \\ 0, & \text{otherwise} \end{cases}\)  (34)

Each particle is evaluated through a fitness function. Therefore, a fitness function has been proposed subject to the reduction of the delay and power consumption of both the Fog nodes and the Cloud datacenter while improving the utilization of computational resources; it is formulated in Eq. 43.

G. LOAD BALANCING MODEL
Load balancing is associated with the factors contributing to enriching the performance of the computation, such as latency, service time, resource utilization, response time, makespan, etc. Therefore, the formulation of the capacity and load of a Fog node is explained as follows:

The capacity of a Fog node is the total load it can manage at a given time \(t\). The capacity is characterized by three parameters: the processing speed (MIPS) per Fog node \(\left(f_{n\,mips(PE)}^{j}\right)\), the number of processing elements (CPU cores) per Fog node \(\left(f_{n\,num(PE)}^{j}\right)\), and the bandwidth \(\left(f_{n\,bandwidth}^{j}\right)\). Hence, the capacity of a Fog node \(C\left(f_n^j\right)\) is expressed as:

\(C\left(f_n^j\right) = f_{n\,num(PE)}^{j} \times f_{n\,mips(PE)}^{j} + f_{n\,bandwidth}^{j}\)  (35)

The capacity of all the Fog nodes in a cluster is denoted as:

\(C(f_n) = \sum_{j=1}^{m} C\left(f_n^j\right)\)  (36)

The load of a Fog node is the total amount of tasks (MI) assigned at a given time \(t\). It is characterized by the overall number of tasks allocated to a Fog node \(f_n^j\) at a time \(t\) in MI together with the simulation time \(T\) of that Fog node in MIPS. The simulation time is the time needed to execute every request (task) at a given time. Hence, it is expressed in Eq. 37.

\(L\left(f_n^j\right) = \frac{\sum_{i=1}^{n} t_i}{T}\)  (37)

The loads of all the Fog nodes in a cluster are calculated using Eq. 38.

\(L(f_n) = \sum_{j=1}^{m} L\left(f_n^j\right)\)  (38)

The Fog node's state is determined by comparing and analysing the load of every Fog node against its respective capacity. If the load of a \(j^{th}\) Fog node at time \(t\) is greater than its capacity \(\left(L\left(f_n^j\right) > C\left(f_n^j\right)\right)\), then the corresponding Fog node is recognized as overloaded; if the load of a \(j^{th}\) Fog node at time \(t\) is less than its capacity \(\left(L\left(f_n^j\right) < C\left(f_n^j\right)\right)\), then the corresponding Fog node is recognized as underloaded. Otherwise, the Fog nodes are treated as normalized. The estimation of load and capacity is applicable to all the Fog nodes of the different clusters as well as to the VMs created in the Cloud datacenter.

The variance of the load, measured using the standard deviation (\(\sigma\)), allows one to determine the degree of system-level imbalance. The standard deviation is calculated using Eq. 39. Here, the threshold value (\(TS_h\)) is based on how much of the cluster's computational resources is utilized at the maximum level. Generally, the maximum utilization of any computational resource is at most 80% or 90%. Hence, the threshold value (\(TS_h\)) is considered as 0.85 in our case.

\(\sigma = \sqrt{\frac{1}{m} \sum_{j=1}^{m} \left( PT_{f_n}^{j} - \overline{PT_{f_n}} \right)^2}\)  (39)
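The weight schedule and the binarisation step of Eqs. 32-34 can be sketched in a few lines. The sketch below follows the text's parameter choices (\(\omega_h = 1.6\), \(\omega_l = 0.8\)); the list-of-floats particle encoding and the `rng` hook are assumptions made for illustration, not the paper's implementation.

```python
import math
import random

# Illustrative sketch of the LWJAYA position update (Eq. 32) with the
# linearly decreasing weight, plus the tanh transfer function and
# stochastic thresholding of Eqs. 33-34.

W_H, W_L = 1.6, 0.8  # highest/lowest weight values, as in the text

def weight(it: int, t_max: int) -> float:
    """Linearly decreasing weight: w = w_h - (w_h - w_l) * it / t_max."""
    return W_H - (W_H - W_L) * it / t_max

def lwjaya_update(x, best, worst, it, t_max, rng=random):
    """Eq. 32: move towards the best and away from the worst solution."""
    w = weight(it, t_max)
    c1, c2 = rng.random(), rng.random()
    return [xi + w * (c1 * (b - abs(xi)) - c2 * (wo - abs(xi)))
            for xi, b, wo in zip(x, best, worst)]

def binarise(x, rng=random):
    """Eqs. 33-34: tanh transfer function, then 0/1 thresholding."""
    def tf(v):
        return (math.exp(abs(v)) - 1.0) / (math.exp(abs(v)) + 1.0)
    return [1 if rng.random() < tf(xi) else 0 for xi in x]
```

A component that drifts far from zero has a transfer value close to 1 and is therefore almost always mapped to 1, which is how the continuous search pressure survives the discretisation.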


Note: \(PT_{f_n}^{j}\) is the processing time of the \(j^{th}\) Fog node and \(\overline{PT_{f_n}}\) is the total processing time, which are calculated in Eq. 3 and Eq. 4.

H. TASK OFFLOADING MODEL
To achieve equilibrium in the clusters of the Fog layer, task offloading/migration is done by offloading tasks from the Overloaded Fog Node (OFN) list onto compatible nodes in the Underloaded Fog Node (UFN) list. Before offloading, the status regarding the availability and consumption of the Fog nodes' resources is checked to ensure a uniform distribution of loads. Therefore, the overall resources utilized (\(R_{used}\)) and the overall available resources (\(R_{avail}\)) of all the UFNs (\(f_n^U\)) are expressed using Eq. 40 and Eq. 41.

\(R_{used}\left(f_n^{U_j}\right) = L\left(f_n^{U_j}\right)\)  (40)

\(R_{avail}\left(f_n^{U_j}\right) = R_{total} - R_{used}\left(f_n^{U_j}\right)\)  (41)

Note: \(L\left(f_n^{U_j}\right)\) is the load of the \(j^{th}\) underloaded Fog node, and \(R_{total}\) is the total resources (assume \(R_{total} = 1\)).

The concern is to identify a compatible Fog node/VM for offloading the tasks from an overloaded Fog node (\(f_n^O\)) onto an underloaded Fog node (\(f_n^U\)). To do so, the cosine similarity between every task of all the overloaded Fog nodes and the underloaded Fog nodes is estimated. The pair \(\left(T_i^{O_k}, f_n^{U_j}\right)\) with a low value of \(\theta\) shows greater similarity. This means that the corresponding task \(T_i^{O_k}\) would be offloaded to the compatible \(j^{th}\) underloaded Fog node \(f_n^{U_j}\) having the better similarity, depending on the obtained value of the cosine similarity. Using Eq. 42, the value of the cosine similarity (\(\theta\)) [20] is estimated.

\(\theta = \cos^{-1}\left( \frac{T_i^{O_k} \times R_{avail}\left(f_n^{U_j}\right)}{\left\|T_i^{O_k}\right\| \times \left\|R_{avail}\left(f_n^{U_j}\right)\right\|} \right)\)  (42)

Note: \(T_i^{O_k}\) is the \(i^{th}\) task of the \(k^{th}\) overloaded Fog node, and \(R_{avail}\left(f_n^{U_j}\right)\) is the resource availability of the \(j^{th}\) underloaded Fog node.

I. PROBLEM FORMULATION
Consider a task scheduling problem with an optimal mapping of tasks onto compatible Fog nodes/VMs in the Fog-Cloud computing system. The set \(T = \{T_1, T_2, \ldots, T_n\}\) consists of the requests/tasks, \(F = \{f_n^1, f_n^2, \ldots, f_n^m\}\) contains the series of Fog nodes (\(f_n\)) distributed in the Fog layer, and \(VM = \{VM_1, VM_2, \ldots, VM_m\}\) encompasses the array of VMs created under the hosts in a Cloud datacenter. The key concern of this study is to map a request \(i\ \{\in T\}\) to a compatible Fog node \(f_n^j\ \{\in F\}\) or a Cloud VM \(VM_j\ \{\in VM\}\) so as to improve the considered constraints, i.e., to reduce latency and energy utilization while increasing the use of resources. These constraints are effective only if it is guaranteed that exactly one request is assigned to one Fog node. The mathematical equation of the proposed objective function is defined in Eq. 43.

\(minimize \sum_{o=1}^{|O|} O_i \times \omega_i\)  (43)

Note: \(O = \{O_i \mid i = 1, \ldots, |O|\} \in \{1,2,3\}^{|O|}\) is the set of objective functions, and \(\omega = \{\omega_i \mid i = 1,2,3\}\) is the set of weights of the three constraints, respectively.

Subject to the following constraints:

\(O_1:\ minimize \sum_{i=0}^{n} T_{latency}^{i,j} \times \omega_1 \leq Tsh_{latency}^{j},\ \forall i \in T,\ \forall j \in F\ \text{or}\ VM,\ \omega_1(=0.3) \in \omega\)  (44)

where \(T_{latency}^{i,j}\) is the total latency of either a Fog node or a VM to process the request \(i\), and \(Tsh_{latency}^{j}\) is the latency threshold for the Fog node/VM, defined as:

\(Tsh_{latency}^{j} = \min_{j \in F\ \text{or}\ VM}\ \max_{i \in T} \left\{ L^{i,j} \right\}\)  (45)

where \(L^{i,j}\) is the delay of the \(j^{th}\) Fog node or VM to process the request \(i\). This constraint ensures that the total completion time stays below the acceptable threshold latency.

\(O_2:\ minimize \sum_{j=1}^{m} T_{Ec} \times \omega_2,\ \forall j \in F\ \text{or}\ VM,\ \omega_2(=0.3) \in \omega\)  (46)

where \(T_{Ec}\) is the total energy consumption by the Fog nodes or VMs while offloading and processing the IoT requests.

\(O_3:\ maximize \sum_{j=1}^{m} R_{Util} \times \omega_3,\ \forall j \in F\ \text{or}\ VM,\ \omega_3(=0.4) \in \omega\)  (47)

where \(R_{Util}\) is the total resource utilization by the Fog nodes or VMs while processing the IoT requests.

J. PROPOSED ALGORITHM
Algorithm 1 explains the load balancing, task offloading and task scheduling algorithm using binary LWJAYA. Moreover, this algorithm can also be implemented in the Cloud Scheduler for executing the tasks intended for Cloud VMs.

Algorithm 1: Load balancing, task offloading and task scheduling using the binary LWJAYA algorithm
Input: Set of IoT devices \(D_{IoT_i}\), set of IoT requests \(T_i\), and set of computational resources at the Fog layer and the Cloud \(CR_j\)
Output: A balanced state of the cluster(s)/VMs and a mapping of tasks \(T_i\) to the target computational resources \(CR_j\)
1. for 1: m
     if (σ ≤ TS_h)
       The system is normalized;
     else
       Check for the possibility of load balancing;
2. for 1: m
     Estimate the capacity, total capacity, load, and total load of the Fog nodes using Eq. 35, 36, 37, and 38;
     if (L(f_n) > C(f_n))
       Load balancing cannot be initiated;
     else
       Initiate the load balancing strategy;
3. for 1: m
     if (L(f_n^j) > C(f_n^j))
       The Fog node/VM is overloaded;
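The cosine-similarity pairing of Eq. 42 can be sketched as below. The sketch assumes each task and each underloaded node is described by a small demand/availability vector, which is an illustrative encoding (the paper compares a task against \(R_{avail}\)); a smaller angle \(\theta\) indicates a more compatible pair.

```python
import math

# Minimal sketch of the Eq. 42 compatibility check: theta is the angle
# between a task's demand vector and a node's availability vector.
# The vector encoding and node labels are assumptions for illustration.

def cosine_angle(task_vec, avail_vec) -> float:
    """theta = arccos( (a . b) / (||a|| * ||b||) ), in radians."""
    dot = sum(a * b for a, b in zip(task_vec, avail_vec))
    na = math.sqrt(sum(a * a for a in task_vec))
    nb = math.sqrt(sum(b * b for b in avail_vec))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def best_target(task_vec, underloaded):
    """Pick the underloaded node (name, availability-vector pair) whose
    availability is most aligned with the task, i.e. smallest theta."""
    return min(underloaded, key=lambda nv: cosine_angle(task_vec, nv[1]))

# A CPU-heavy task is matched to the CPU-rich node
print(best_target([1.0, 0.1], [("mem-rich", [0.1, 1.0]),
                               ("cpu-rich", [0.9, 0.2])])[0])
```

Because \(\arccos\) is monotonically decreasing in the cosine, minimising \(\theta\) and maximising the raw cosine similarity select the same target node.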


       Append it to the overloaded Fog node (OFN) list;
     else if (L(f_n^j) < C(f_n^j))
       The Fog node/VM is underloaded;
       Append it to the underloaded Fog node (UFN) list;
     else
       The Fog node/VM is normalized;
       Append it to the normalized Fog node (NFN) list;
4. for 1: m
     while (UFN ≠ null)
       Sort the Fog nodes/VMs in the UFN list in ascending order based on their loads;
       if (L(f_n^j) = L(f_n^k))
         Re-sort the Fog nodes in the UFN list in descending order based on the degree of utilization;
     Sort the Fog nodes/VMs in the OFN list in decreasing order based on their loads;
5. for 1: m
     while (OFN ≠ null && UFN ≠ null)
       Obtain the Task List to be offloaded from the OFN;
       for (T_i ∈ TaskList)
         Append the task T_i to the Unscheduled Task List (UTL);
6. for 1: m
     while (UFN ≠ null)
       Estimate the sum of the used and free resources of all the underloaded Fog nodes in the UFN list;
       Identify the compatibility between an underloaded Fog node and the tasks of the OFN list using the cosine similarity (θ);
7. for 1: m
     while (UFN ≠ null && UTL ≠ null)
       for (f_j ∈ UFN && T_i ∈ UTL)
         Re-estimate the load of the Fog nodes belonging to the UFN;
         if (C(f_j) ≤ L(f_n^j) ∈ UFN ≤ L(f_j))
           Allocate the task T_i according to the cosine similarity (θ);
         else
           Offload the task to the next compatible cluster for processing/the next suitable underloaded VM in the Cloud;
8. LOOP
   for 1: n
     Evaluate the fitness values through Eq. 43;
     Identify the best and worst solutions;
     if (fitness value < BEST_i)
       Update the current fitness value as BEST_i;
     end if;
   end for;
9. for 1: n
     Instantiate the updated placement of particles through the position update using Eq. 32 of LWJAYA;
     Update the position by converting the continuous new solutions into binary solutions through Eq. 33 and 34;
   end for;
10. Repeat until the maximum criterion (t_max) is reached;
11. Output the optimal solution;

IV. IMPLEMENTATION AND COMPARATIVE ANALYSIS
This section discusses the simulation configuration for the devised approach and the comparative study of the suggested technique against some of the previously existing techniques, which shows the proficiency of the presented method.

A. SIMULATION SETUP
A computer with an Intel i7, 4.20 GHz core and 8 GB of RAM is used for our experimental simulations. As a simulator, iFogSim has been used for creating a Fog-assisted computation environment, which runs over CloudSim [21]. For our experiments, two datacenters are considered: a Fog datacenter and a Cloud datacenter, where several hosts and an array of Fog nodes/VMs are deployed. Each task is presumed to be executed by the corresponding VM under a space-shared policy. Every experimental evaluation is repeated 10 times autonomously for each QoS scheduling parameter, and the mean values are documented to avoid any ambiguities in the obtained results. The simulation time for each experiment is set to 30 minutes. For BLWJAYA, the population size has been set to 50 particles, and 100 iterations has been set as the termination criterion (\(t_{max}\)). Table II summarizes the specifications of the different categories of Fog nodes/VMs, and Table III shows the specifications of the task set. Two benchmark datasets were considered to estimate the effectiveness of the devised approach against the baselines. First, the dataset [22] presents an ETC (estimated time to compute) matrix, which consists of several tasks and machines arranged row-wise and column-wise. Second, NASA's three months of workload in terms of completion time is considered as a benchmark dataset [23]. Basically, it is high-performance multiprocessor scheduling data generated from real-time and latency-sensitive applications.

TABLE II
TECHNICAL SPECIFICATIONS

Category | Cores   | Processing Speed, MIPS | Storage | Bandwidth, MIPS
High     | 6 vCPUs | 12500                  | 1024 TB | 102400
Medium   | 4 vCPUs | 6500                   | 1024 GB | 102400
Low      | 2 vCPUs | 3250                   | 1024 GB | 102400

TABLE III
TASKS' SPECIFICATIONS

Nature             | Applications               | # of IoTs | # of tasks | Tasks' length (MI)   | Latency rate
Task heterogeneity | Healthcare                 | 300       | 3000       | 0 – 5000 (low)       | 0.7
                   | NASA                       |           |            | 5001 – 10000 (med)   | 0.3
                   | High-performance computing |           |            | 10001 – 15000 (high) | 0.1

For ease of implementation, these two datasets have been combined and represented in the form of an ETC matrix considering the twelve instances in the structure of \(u\_x\_tm\). In this structure, \(u\) refers to the instances that are uniformly generated using a uniform distribution function, \(x\) implicates the nature of consistency, i.e., inconsistent (\(i\)), semi-consistent (\(s\)) and consistent (\(c\)), and \(t\) and \(m\) denote the task and machine heterogeneity with respect to high (\(h\)) and low (\(l\)),
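The twelve \(u\_x\_tm\) instance labels follow a mechanical naming scheme, which can be enumerated as below. This sketch reproduces the labels only; the ETC values themselves come from the benchmark datasets.

```python
# Enumerate the twelve u_x_tm instance labels described in the text:
# x = consistency class, t/m = task/machine heterogeneity (high/low).

consistency = ["c", "i", "s"]   # consistent, inconsistent, semi-consistent
levels = ["h", "l"]             # high / low heterogeneity

instances = [f"u_{x}_{t}{m}"
             for x in consistency   # consistency class
             for t in levels        # task heterogeneity
             for m in levels]       # machine heterogeneity
print(instances)
```

The enumeration order (u_c_hh, u_c_hl, u_c_lh, u_c_ll, then the u_i and u_s groups) matches the listing of instances in the text.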


respectively. Therefore, in view of this dynamic nature of the tasks and computing nodes with the associated latency, the structure \(u\_x\_tm\) is represented in the form of twelve instances: u_c_hh, u_c_hl, u_c_lh, u_c_ll, u_i_hh, u_i_hl, u_i_lh, u_i_ll, u_s_hh, u_s_hl, u_s_lh, and u_s_ll. A set of 3000 tasks, which are dynamic in nature, and 294 VMs has been considered in an increasing manner in the structure of Tasks × Nodes. The considered tasks are assumed to be non-preemptive and independent in nature. Table IV shows the comparative values of the service rate, resource utilization, latency rate, energy consumption and load balancing rate of each considered algorithm against those of the proposed approach, pertaining to all the instances considered.

TABLE IV
VALUES FOR SERVICE RATE, RESOURCE UTILIZATION, LATENCY RATE, ENERGY CONSUMPTION AND LOAD BALANCING RATE

QoS Parameter | Algorithm | u_c_hh | u_c_hl | u_c_lh | u_c_ll | u_i_hh | u_i_hl | u_i_lh | u_i_ll | u_s_hh | u_s_hl | u_s_lh | u_s_ll
Service Rate | BLA [24] | 1.23E+06 | 1.28E+06 | 1.24E+06 | 1.21E+06 | 1.28E+06 | 1.30E+06 | 1.27E+06 | 1.26E+06 | 1.25E+06 | 1.28E+06 | 1.26E+06 | 1.24E+05
Service Rate | [25] | 9.26E+05 | 9.16E+05 | 9.27E+05 | 9.24E+05 | 9.39E+05 | 9.39E+05 | 9.37E+05 | 9.33E+05 | 9.34E+05 | 9.38E+05 | 9.35E+05 | 9.33E+05
Service Rate | [15] | 8.79E+05 | 8.82E+05 | 8.81E+05 | 8.80E+05 | 8.88E+05 | 8.86E+05 | 8.84E+05 | 8.82E+05 | 8.84E+05 | 8.88E+05 | 8.82E+05 | 8.80E+05
Service Rate | [11] | 3.45E+05 | 3.48E+05 | 3.44E+05 | 3.42E+05 | 3.51E+05 | 3.53E+05 | 3.50E+05 | 3.50E+05 | 3.48E+05 | 3.50E+05 | 3.46E+05 | 3.44E+05
Service Rate | proposed | 1.12E+05 | 1.14E+05 | 1.11E+05 | 1.10E+05 | 1.16E+05 | 1.18E+05 | 1.15E+05 | 1.14E+05 | 1.14E+05 | 1.15E+05 | 1.11E+05 | 1.10E+05
Resource Utilization | BLA [24] | 12.98 | 12.73 | 12.62 | 12.47 | 13.43 | 13.16 | 13.2 | 13.002 | 13.76 | 13.65 | 13.61 | 13.42
Resource Utilization | [25] | 23.14 | 23.11 | 23.06 | 22.78 | 23.46 | 23.34 | 23.38 | 23 | 23.83 | 23.56 | 23.64 | 23.35
Resource Utilization | [15] | 34.8 | 34 | 34.24 | 34.12 | 35.17 | 35.01 | 35.08 | 35 | 35.78 | 35.58 | 35.68 | 35.41
Resource Utilization | [11] | 59.2 | 59.02 | 59.04 | 59 | 59.68 | 59.45 | 59.55 | 59.27 | 59.97 | 59.82 | 59.81 | 59.71
Resource Utilization | proposed | 96.29 | 96.12 | 96.2 | 96.02 | 96.79 | 96.67 | 96.71 | 96.34 | 97.14 | 96.89 | 96.72 | 96.45
Latency Rate | BLA [24] | 200.1 | 199.64 | 199.23 | 198.91 | 201.56 | 201.12 | 201.18 | 200.64 | 203.68 | 202.42 | 202.67 | 201.58
Latency Rate | [25] | 188.36 | 187.64 | 187.59 | 187.03 | 189.35 | 188.23 | 187.82 | 187.02 | 189.58 | 188.46 | 188.2 | 187.68
Latency Rate | [15] | 192.41 | 191.48 | 191.62 | 190.62 | 190.87 | 189.25 | 189.76 | 188.49 | 189.92 | 188.16 | 188.21 | 187.62
Latency Rate | [11] | 183.5 | 182.4 | 183.61 | 182.01 | 183.82 | 182.62 | 182.84 | 182.62 | 184.47 | 183.3 | 183.02 | 182.48
Latency Rate | proposed | 164.73 | 163.2 | 163.65 | 163.22 | 165.35 | 164.86 | 164.81 | 163.21 | 166.74 | 166.13 | 166.26 | 165.79
Energy Consumption | [15] | 308.48 | 308.08 | 308.14 | 307.99 | 313.41 | 313.28 | 313.26 | 313 | 310.68 | 310.27 | 310.33 | 310.11
Energy Consumption | [26] | 302.48 | 302.12 | 302.17 | 302 | 305.4 | 305.29 | 305.21 | 305.09 | 303.2 | 302.89 | 302.81 | 302.44
Energy Consumption | [11] | 282.6 | 282.33 | 282.26 | 282.07 | 285.15 | 284.98 | 284.82 | 284.62 | 283.42 | 283.22 | 283.2 | 283
Energy Consumption | proposed | 262.82 | 262.46 | 262.36 | 262.19 | 264.48 | 264.12 | 264.02 | 263.96 | 263.76 | 263.43 | 263.33 | 263
Load Balancing Rate | [15] | 309.48 | 307.18 | 301.54 | 307.99 | 333.41 | 383.28 | 323.26 | 323 | 310.68 | 310.27 | 310.33 | 310.11
Load Balancing Rate | [27] | 300.48 | 305.22 | 300.37 | 312 | 315.4 | 315.29 | 315.21 | 325.09 | 313.2 | 312.89 | 312.81 | 312.44
Load Balancing Rate | [11] | 282.6 | 281.44 | 275.16 | 292.07 | 275.15 | 274.98 | 274.82 | 314.62 | 273.42 | 273.22 | 273.2 | 273
Load Balancing Rate | proposed | 261.83 | 272.56 | 282.86 | 272.19 | 254.48 | 254.12 | 254.02 | 283.96 | 253.76 | 253.43 | 253.33 | 253

B. COMPARATIVE STUDY FOR SERVICE TIME
The BLA [24] results in an increased service time because it does not consider a load balancing strategy. In [15], the machines' heterogeneity has not been considered. In the Fog-assisted environment, the heterogeneity of both the tasks and the Fog nodes/VMs holds significance due to their disparate specifications. Moreover, [15] does not calculate the load for each Fog node, thereby resulting in a lower service time. The algorithm used in [15] applies a heuristic-based algorithm that does not generate optimal solutions. When the number of tasks to be executed is small, the approaches [15, 25] perform nearly equally. When the number of tasks is small, the approach in [11] and the proposed approach produce approximate results due to their implementation of load balancing approaches. It is evident from Figure 3 that, for a wide range of tasks and with heterogeneous Fog nodes, the proposed approach offers a service time improvement of 51% over [24], 88% over [25], 87% over [15] and 67% over [11], thereby outperforming the others because of its load balancing strategy.

C. COMPARATIVE STUDY FOR DEGREE OF UTILIZATION
In [15, 24], the authors have not considered load balancing approaches. Hence, the utilization is low due to the
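The average improvement percentages quoted in these comparative discussions can be derived from pairs of rows in Table IV with a small helper. This is a hedged sketch of that arithmetic only; the sample values below are illustrative and are not taken from the table.

```python
# Illustrative helper: mean relative reduction of a proposed metric
# against a baseline, instance by instance, expressed in percent.
# This is not the paper's evaluation script.

def mean_improvement_pct(baseline, proposed):
    """Mean of (b - p) / b across paired instances, as a percentage."""
    ratios = [(b - p) / b for b, p in zip(baseline, proposed)]
    return 100.0 * sum(ratios) / len(ratios)

# e.g. comparing two latency-rate rows over two hypothetical instances
print(round(mean_improvement_pct([200.0, 199.0], [164.0, 163.0]), 2))
```

Averaging the per-instance ratios (rather than the ratio of sums) weights every instance equally, which matters when the instance magnitudes differ.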


imbalanced state of the Fog nodes. In [11], the degree of utilization has not been considered despite the implementation of a load balancing approach. It is shown in Figure 4 that the proposed approach elevates the degree of utilization, owing to the minimization of the service time as well as to the load balancing when there is a spike in load, in contrast to the other available approaches. On average, it can be seen that the proposed approach utilizes 83.5% more resources as compared to [24], 73.3% more as compared to [25], 61.7% more as compared to [15] and 37.1% more than [11].

FIGURE 3. Comparative performance analysis graph for Service Time.

FIGURE 4. Comparative performance analysis graph for Resource Utilization.

D. COMPARATIVE STUDY FOR LATENCY RATE
In [15, 24], the delay constraint has not been taken into consideration. When the algorithms of both approaches were simulated under the latency constraint, both were found to be less effective due to their reliance on heuristic approaches, which give not-so-optimal results in comparison to metaheuristic approaches, whereas [11] considers the delay for thirty tasks with sixty-four Fog nodes. However, with a smaller number of requests and Fog nodes, load balancing or delay does not hold significance. In comparison, the proposed approach presents a notable improvement in latency rate over the existing approaches. In our approach, the average latency is considered, owing to the latency caused by the three layers with their different transmission and propagation times. The comparative analysis of the latency rate of the proposed approach against some existing approaches is shown in Figure 5, from which it is evident that, on average, the proposed approach has 18.02%, 12.39%, 12.01% and 9.97% lower latency than [24], [25], [15] and [11], respectively.

FIGURE 5. Comparative performance analysis graph for Latency Rate.

E. COMPARATIVE STUDY FOR ENERGY CONSUMPTION

FIGURE 6. Comparative performance analysis graph for Energy Consumption.

In the presented approach, the energy consumption is evaluated while performing the major tasks in the Fog layer, i.e., task scheduling and load balancing. The outputs of the simulation have been contrasted with other available algorithms [11, 15, 26]. In [26], the rate of energy consumption increases when there is a small number of tasks. The same applies to [11], where the energy consumption is evaluated for thirty tasks by sixty-four Fog nodes. Due to the even distribution of loads

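The energy results in Figure 6 are reported in kJ. As a minimal sketch of how such per-node consumption could be accumulated, assuming a simple busy/idle power model (an illustration only, with made-up power ratings and durations, not the paper's exact formulation):

```python
# Illustrative busy/idle energy accounting for Fog nodes. The power figures
# and durations are hypothetical placeholders, not the paper's parameters.
def node_energy_kj(p_busy_w: float, t_busy_s: float,
                   p_idle_w: float, t_idle_s: float) -> float:
    """Energy (kJ) = (busy power * busy time + idle power * idle time) / 1000."""
    return (p_busy_w * t_busy_s + p_idle_w * t_idle_s) / 1000.0

# Three hypothetical Fog nodes: (busy W, busy s, idle W, idle s).
nodes = [
    (90.0, 400.0, 20.0, 200.0),
    (120.0, 250.0, 25.0, 350.0),
    (75.0, 500.0, 15.0, 100.0),
]
total_kj = sum(node_energy_kj(*n) for n in nodes)
print(f"total: {total_kj:.2f} kJ")  # prints: total: 117.75 kJ
```

Under such a model, lowering service time shrinks the busy intervals, which is consistent with the coupling between service time and energy discussed in this section.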
VOLUME XX, 2017 7

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4
This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2024.3357122

Due to the even distribution of loads among Fog nodes, [11] accomplishes a great deal in conserving energy. It is apparent from Figure 6 that our proposed approach consumes 15.24%, 13.30%, and 7.2% less energy than [15], [26], and [11], respectively, which is a considerable reduction in energy consumption over the others. Owing to the reduction in service time and latency and the maximization of utilization, the degree of energy consumption is significantly ameliorated.

F. COMPARATIVE STUDY FOR LOAD BALANCING RATE
In [27], a dynamic resource allocation has been proposed in which loads steadily increase among Fog nodes as the number of requests rises. In [15], the load balancing procedure is not as efficient due to the use of a heuristic approach. In [11], the authors calculated the current utilization of each Fog node based on a dynamic threshold value; however, they did not calculate the compatibility between underloaded Fog nodes and overloaded tasks for the migration. In contrast, our approach evaluates the current load of each Fog node and its total resource availability to compute the compatibility for the migration. Figure 7 shows the comparative analysis of the load balancing rate of the proposed approach against some existing approaches [11, 15, 27]. Approximately, the proposed approach has 15.8%, 15%, and 4.8% lower load balancing rates than [15], [27], and [11], respectively.

FIGURE 7. Comparative performance analysis graph for Load Balancing Rate.

G. COMPUTATIONAL ANALYSIS
The ability of a metaheuristic algorithm to tackle NP-hard problems with reduced time complexity determines its success rate. Determining the computational complexity of a metaheuristic algorithm is necessary to evaluate its robustness and efficacy. The load balancing method takes O(m × t) time to complete. The time required for determining a particle's fitness value using the traditional JAYA technique is O(g × t_r), where g represents the group of particles and t_r is the time required to determine a single fitness score for a particle. Particle position updates in JAYA take O(t_max × g) time, where t_max is the maximum number of iterations. Consequently, the total time required by the JAYA algorithm is O(g·t_r + (t_max × g)). In LWJAYA, the fundamental equation of JAYA is supplemented with a weight factor; thus, it also requires O(g·t_r + (t_max × g)) time.

H. STATISTICAL ANALYSIS
To evaluate the statistical accuracy and effectiveness of the job scheduling algorithms, a statistical analysis was carried out. The techniques were compared using Friedman's test, which is intended to identify significant differences between them [30]. Using Friedman's overall rating, the significance of JAYA and BLWJAYA was graded using the generated values. The null hypothesis H0 is defined as follows: H0 states that all the job scheduling approaches perform equivalently.
To validate the model, the authors investigated six methods and five quality metrics. Based on its expected accuracy, each approach was assigned a rank between 1 and A. The results of each evaluation metric were used to rank each technique. The approach with the highest average total sum over the efficiency metrics is ranked 6, while the method with the lowest average total sum of the evaluation metrics (e.g., makespan) is ranked 1. When multiple approaches produce the same result for a quality indicator, the rating is calculated by averaging the algorithms' ranks. Each assessment parameter received a corresponding rating for all procedures. The confidence level was set at 0.10 for this trial. The average score R_q of the q-th algorithm is found using Eq. 48:

R_q = (sum of the total ranks obtained by the q-th algorithm) / (total number of performance parameters)   (48)

The Friedman statistic is calculated as given in Eq. 49:

F_F = ((E − 1)·X_F²) / (E(A − 1) − X_F²)   (49)

where X_F² = (12E / (A(A + 1)))·[Σ_{q=1}^{A} R_q² − A(A + 1)²/4], A is the total number of approaches, and E is the overall number of evaluation metrics.
The degrees of freedom determine the distribution of the Friedman statistic F_F, i.e., the F-distribution with (A − 1) and (A − 1)(E − 1) degrees of freedom. For five performance metrics and six methodologies, the degrees of freedom are 5 and 10; consequently, 3.29740 is the critical value at the 0.10 level [31]. The null hypothesis H0 is accepted if the value of F_F is smaller than the critical value; otherwise, it is rejected. With six techniques and five evaluation measures, the F_F value of 5.44 is greater than the critical value of 3.29740; hence, the null hypothesis contradicts the evaluated premise and ought to be rejected. Consequently, every scheduling scheme that has been studied behaves significantly differently.
Holm's test is utilized as the post hoc test to determine whether the efficacy of the proposed algorithm is statistically superior to that of the compared algorithms. In this test, the null hypothesis H0 assumes that all the compared procedures are equivalent. Using Eq. 50, the test calculates the z value, which is then used to retrieve the probability p from the normal distribution table; the p_i values and the corresponding α/(A − i) values are compared for the probability.

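To make Eqs. 48–50 concrete, the ranking and test statistics can be sketched in plain Python. The rank matrix below is hypothetical (it is not the paper's measured data), and the choice of which pair of algorithms to compare with z is purely illustrative:

```python
import math

# Hypothetical rank matrix (NOT the paper's data): E evaluation metrics (rows)
# by A algorithms (columns); entry [e][q] is the rank of algorithm q on metric e.
ranks = [
    [6, 4, 5, 2, 3, 1],
    [6, 5, 4, 3, 1, 2],
    [5, 6, 4, 2, 3, 1],
    [6, 4, 5, 1, 3, 2],
    [6, 5, 3, 2, 4, 1],
]
E, A = len(ranks), len(ranks[0])

# Eq. 48: average rank R_q of each algorithm over all evaluation metrics.
R = [sum(row[q] for row in ranks) / E for q in range(A)]

# Eq. 49: Friedman chi-square X_F^2 and the derived statistic F_F.
X2 = (12 * E / (A * (A + 1))) * (sum(r * r for r in R) - A * (A + 1) ** 2 / 4)
F_F = ((E - 1) * X2) / (E * (A - 1) - X2)

# Eq. 50: Holm's post hoc z statistic for one pair of algorithms.
SE = math.sqrt(A * (A + 1) / (6 * E))
z = (R[5] - R[0]) / SE  # lowest- vs. highest-average-rank algorithm

print(R, round(X2, 3), round(F_F, 3), round(z, 3))
```

With real measurements, each z would be converted to a p-value from the normal table and compared against α/(A − i) in Holm's step-down order, as summarized in Table V.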

Table V clearly shows that, for all the compared methodologies, the presumed hypothesis is false. Consequently, it is found that the recommended approach performs statistically better than the compared methods.

z = (R_i − R_j) / SE   (50)

where SE = √(A(A + 1) / (6E)), and R_i and R_j are the average ranks of the two compared algorithms.

TABLE V
VALUES OF HOLM'S TEST

Compared Approaches | z-value  | p-value | α/(A − i) | Hypothesis
[11]                | −2.1748  | 0.03309 | 0.02      | Rejected
[15]                | −2.5827  | 0.02582 | 0.035     | Rejected
[24]                | −2.01    | 0.05879 | 0.34      | Rejected
[25]                | −2.01    | 0.05879 | 0.34      | Rejected
[26]                | −3.1258  | 0.00096 | 0.08      | Rejected
[27]                | −3.1334  | 0.00070 | 0.05      | Rejected

V. CONCLUSIONS AND FUTURE SCOPES
The amount of data generated by smart IoT devices is increasing at a rapid pace, and along with it grow the users and their endless service requests. There is some delay in the communication between the users' IoT devices and the Cloud datacenter due to the great distance between them. To take advantage of the excess storage and processing capacity available to Cloud-based systems and Fog-assisted nodes, it is critical to offload such massive amounts of data. This latency can be minimized by adding a Fog layer between the Cloud and the IoT layer. Furthermore, a large amount of energy is used by the dispersed Fog servers and Cloud datacenters for these intensive computations. So, to address these issues, the proposed approach considered the task characteristics, the associated QoS constraints, and the heterogeneity of both the increasing tasks and the Fog nodes/VMs to leverage the capability of computationally poor and resource-limited nodes. It aims to minimize the incurred latency, the energy consumed during computation, the service time for the end user's satisfaction, and the load balancing rate, while increasing the degree of utilization for the service provider. Fuzzy logic has been utilized to classify the tasks, depending on their requirements, for offloading into different target layers to reduce the service time along with the latency. To ensure that tasks are assigned to the most appropriate compute nodes, a binary LWJAYA has been implemented. A threshold-based load balancing strategy has been proposed to distribute loads evenly among Fog nodes and reduce task migration. Additionally, a compatibility-based task offloading method is created to move the tasks (requests) to the underutilized, most compatible Fog nodes based on cosine similarity. The obtained results were assessed through a series of experiments and validated against some of the existing approaches. It is evident that the proposed approach outperforms the other baselines with approximate improvements of 26.2%, 12%, 7%, 8.63%, and 6% for the resource utilization, service rate, latency rate, energy consumption, and load balancing rate, respectively.
However, because task dependency is not considered in the proposed work, a model that can depict task interdependence and schedule tasks to reduce the service rate of latency-sensitive applications is needed.

CONFLICT OF INTEREST
The authors of this research paper confirm that they have no conflict of interest to disclose.

REFERENCES
[1] H. Tyagi and R. Kumar, "Cloud computing for iot. In: Internet of Things (IoT)," Springer, Berlin, pp. 25–41, 2020.
[2] A. Mahapatra, K. Mishra, R. Pradhan, and S. K. Majhi, "Next Generation Task Offloading Techniques in Evolving Computing Paradigms: Comparative Analysis, Current Challenges, and Future Research Perspectives," Archives of Computational Methods in Engineering, pp. 1-70, 2023, doi: https://doi.org/10.1007/s11831-023-10021-2.
[3] Y. Sahni, J. Cao, S. Zhang, and L. Yang, "Edge mesh: A new paradigm to enable distributed intelligence in internet of things," IEEE Access, vol. 5, pp. 16441–16458, 2017.
[4] P. Cong, J. Zhou, L. Li, K. Cao, T. Wei, and K. Li, "A survey of hierarchical energy optimization for mobile edge computing: A perspective from end devices to the cloud," ACM Comput Surv (CSUR), vol. 53(2), pp. 1–44, 2020.
[5] A. Mahapatra, K. Mishra, S. K. Majhi, and R. Pradhan, "Latency-aware Internet of Things Scheduling in Heterogeneous Fog-Cloud Paradigm," In 2022 3rd International Conference for Emerging Technology (INCET), pp. 1-7, 2022, doi: 10.1109/INCET54531.2022.9824613.
[6] R. Mahmud, K. Ramamohanarao, and R. Buyya, "Application management in fog computing environments: A taxonomy, review and future directions," ACM Comput Surv, vol. 53(4), pp. 1–43, 2020.
[7] S. Ghosh, A. Mukherjee, S. K. Ghosh, and R. Buyya, "Mobi-iost: mobility-aware cloud-fog-edge-iot collaborative framework for time-critical applications," IEEE Transactions on Network Science and Engineering, vol. 7(4), pp. 2271-2285, 2019.
[8] A. Hazra, M. Adhikari, T. Amgoth, and S. N. Srirama, "Joint computation offloading and scheduling optimization of IoT applications in fog networks," IEEE Transactions on Network Science and Engineering, vol. 7(4), pp. 3266-3278, 2020.
[9] P. Tam, S. Math, and S. Kim, "Optimized Multi-Service Tasks Offloading for Federated Learning in Edge Virtualization," IEEE Transactions on Network Science and Engineering, vol. 9(6), pp. 4363-4378, 2022.
[10] S. Long, Y. Zhang, Q. Deng, T. Pei, J. Ouyang, and Z. Xia, "An Efficient Task Offloading Approach Based on Multi-objective Evolutionary Algorithm in Cloud-Edge Collaborative Environment," IEEE Transactions on Network Science and Engineering, 2022, doi: 10.1109/TNSE.2022.3217085.
[11] A. Mahapatra, K. Mishra, S. K. Majhi, and R. Pradhan, "EFog-IoT: Harnessing Power Consumption in Fog-Assisted of Things," In 2022 IEEE Region 10 Symposium (TENSYMP), pp. 1-6, 2022, doi: 10.1109/TENSYMP54529.2022.9864457.
[12] K. Mishra and S. Majhi, "A state-of-art on cloud load balancing algorithms," International Journal of Computing and Digital Systems, vol. 9(2), pp. 201-220, 2020.
[13] K. Mishra and S. K. Majhi, "A binary Bird Swarm Optimization based load balancing algorithm for cloud computing environment," Open Computer Science, vol. 11(1), pp. 146-160, 2021.
[14] V. Jafari and M. H. Rezvani, "Joint optimization of energy consumption and time delay in IoT-fog-cloud computing environments using NSGA-II metaheuristic algorithm," Journal of Ambient Intelligence and Humanized Computing, pp. 1-24, 2021, doi: https://doi.org/10.1007/s12652-021-03388-2.
[15] K. Mishra, G. N. Rajareddy, U. Ghugar, G. S. Chhabra, and A. H. Gandomi, "A Collaborative Computation and Offloading for Compute-intensive and Latency-sensitive Dependency-aware Tasks in Dew-enabled Vehicular Fog Computing: A Federated Deep Q-Learning Approach," IEEE Transactions on Network and Service Management, 2023, doi: 10.1109/TNSM.2023.3282795.


[16] C. Pradhan and C. N. Bhende, "Online load frequency control in wind integrated power systems using modified Jaya optimization," Engineering Applications of Artificial Intelligence, vol. 77, pp. 212-228, 2019.
[17] K. Mishra, J. Pati, and S. K. Majhi, "A dynamic load scheduling in IaaS cloud using binary JAYA algorithm," Journal of King Saud University-Computer and Information Sciences, vol. 34(8), pp. 4914-4930, 2020.
[18] S. Ningning, G. Chao, A. Xingshuo, and Z. Qiang, "Fog Computing Dynamic Load Balancing Mechanism Based on Graph Repartitioning," China Communications, pp. 156–164, 2016.
[19] T. Prakash, V. P. Singh, S. Singh, and S. Mohanty, "Binary Jaya algorithm based optimal placement of phasor measurement units for power system observability," Energy Conversion and Management, vol. 140, pp. 34-35, 2017.
[20] K. Mishra, R. Pradhan, and S. K. Majhi, "Quantum-inspired binary chaotic salp swarm algorithm (QBCSSA)-based dynamic task scheduling for multiprocessor cloud computing systems," J Supercomput, vol. 77, pp. 10377–10423, 2021.
[21] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. De Rose, and R. Buyya, "CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms," Software: Practice and Experience, vol. 41(1), pp. 23-50, 2011.
[22] T. D. Braun, H. J. Siegel, N. Beck, L. L. Bölöni, M. Maheswaran, A. I. Reuther, and R. F. Freund, "A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems," Journal of Parallel and Distributed Computing, vol. 61(6), pp. 810-837, June 2001, doi: https://doi.org/10.1006/jpdc.2000.1714.
[23] D. G. Feitelson and B. Nitzberg, "Job characteristics of a production parallel scientific workload on the NASA Ames iPSC/860," In Workshop on Job Scheduling Strategies for Parallel Processing, Springer, Berlin, Heidelberg, April 1995, pp. 337-360, doi: https://doi.org/10.1007/3-540-60153-8_38.
[24] S. S. Tripathy, D. S. Roy, and R. K. Barik, "M2FBalancer: A mist-assisted fog computing-based load balancing strategy for smart cities," Journal of Ambient Intelligence and Smart Environments, vol. 13(3), pp. 219-233, 2021.
[25] H. O. Hassan, S. Azizi, and M. Shojafar, "Priority, network and energy-aware placement of IoT-based application services in fog-cloud environments," IET Communications, vol. 14(13), pp. 2117–2129, 2020.
[26] X. Pham, N. D. Man, N. D. T. Tri, N. Q. Thai, and E. Huh, "A cost and performance effective approach for task scheduling based on collaboration between cloud and fog computing," Int. J. Distrib. Sens. Netw., vol. 13, pp. 1–16, 2017.
[27] S. Bitam, S. Zeadally, and A. Mellouk, "Fog computing job scheduling optimization based on bees swarm," Enterprise Information Systems, vol. 12(4), pp. 373-397, 2018.
[28] C. Chakraborty, K. Mishra, S. K. Majhi, and H. K. Bhuyan, "Intelligent Latency-aware tasks prioritization and offloading strategy in Distributed Fog-Cloud of Things," IEEE Transactions on Industrial Informatics, vol. 19(2), pp. 2099-2106, 2022.
[29] S. Iftikhar, M. M. M. Ahmad, S. Tuli, D. Chowdhury, M. Xu, S. S. Gill, and S. Uhlig, "HunterPlus: AI based energy-efficient task scheduling for cloud–fog computing environments," Internet of Things, vol. 21, pp. 100667, 2023.
[30] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," Journal of Machine Learning Research, vol. 7, pp. 1-30, 2006.
[31] F Distribution Table, 18 Mar. 2018. Retrieved from http://www.socr.ucla.edu/applets.dir/f_table.html.

ABHIJEET MAHAPATRA has obtained his B. Tech. degree in Computer Science and Engineering (CSE) from BPUT, Rourkela, Odisha, India, and an M. Tech. degree in CSE from Veer Surendra Sai University of Technology (VSSUT), Burla, Sambalpur, India. Mr. Mahapatra is currently pursuing his Ph. D. degree in CSE at VSSUT, Burla, Sambalpur, India. His research area includes task scheduling and load balancing with IoT management in Cloud and Fog computing. He has published three journal and two conference papers.

SANTOSH K. MAJHI is currently working as an Associate Professor in the Department of Computer Science and Information Technology at Guru Ghasidas Viswavidyalaya, Bilaspur, Chhattisgarh, India. Dr. Majhi has completed his B. Tech. from the VSSUT, Burla, Sambalpur, India, M. Tech. in CSE from the Utkal University, Bhubaneswar, and Ph. D. in Computer and Information Systems from the Sri Sri University, Cuttack, India. His research area includes cloud computing, security analytics, data mining, soft computing, and AI. He has published more than seventy papers in various journals of repute and conference proceedings. He is currently an IEEE Senior Member. Dr. Majhi is the recipient of the Young Scientist Award from VIFRA, India, the Young IT Professional Award from the Computer Society of India, and the Best Faculty Award from Veer Surendra Sai University of Technology, Burla, Sambalpur, India.

KAUSHIK MISHRA is currently working as an Assistant Professor in the Department of Computer Science and Engineering (CSE) at Gandhi Institute of Technology and Management (GITAM) (Deemed to be University), Visakhapatnam, Andhra Pradesh, India. Dr. Mishra obtained his Ph. D. degree from VSSUT, Burla in 2021. His research area includes cloud computing and fog computing. He has publications in various journals of repute and conference proceedings. He has received two best paper awards in two conferences held at NIT, Agartala, India and ITER, Bhubaneswar, India in the year 2020.

ROSY PRADHAN is currently working as an Assistant Professor in the Department of Electrical Engineering, VSSUT, Burla, Sambalpur, India. Dr. Pradhan received her B. Tech. from the College of Engineering and Technology (CET), Bhubaneswar in 2010, M. Tech. in Control and Automation from the National Institute of Technology (NIT), Rourkela, India, and a Ph. D. in Control System Engineering from VSSUT, Burla, India in the year 2019. Her research area includes fractional-order control systems, Computational Intelligence, AI, etc. Currently, she has more than twenty papers in various journals of repute and conference proceedings.

D. CHANDRASEKHAR RAO currently works at the Department of Information Technology, Veer Surendra Sai University of Technology. D. Chandrasekhar does research in Theory of Computation, Computer Security and Reliability, and Computer Communications (Networks). His current project is 'Multi-robot Navigation'.


SANDEEP K. PANDA currently works as a Professor in the Faculty of Science & Technology at ICFAI University, Hyderabad, Telangana, India. His expertise is in Artificial Intelligence, Data Science, IoT, Blockchain Technology, and the Metaverse. He has published more than 60 papers in top-tier conferences and various journals of repute, bearing more than 1000 citations, with an h-index of 18.
