
Hindawi

Wireless Communications and Mobile Computing


Volume 2018, Article ID 6421607, 15 pages
https://doi.org/10.1155/2018/6421607

Research Article
Dynamic Resource Allocation for Load
Balancing in Fog Environment

Xiaolong Xu,1,2,3,4 Shucun Fu,1,2 Qing Cai,1,2 Wei Tian,1,2 Wenjie Liu,1,2 Wanchun Dou,3 Xingming Sun,1,2 and Alex X. Liu1,4

1 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
2 Jiangsu Engineering Centre of Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, China
3 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
4 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, USA

Correspondence should be addressed to Wanchun Dou; [email protected]

Received 6 December 2017; Accepted 19 March 2018; Published 26 April 2018

Academic Editor: Deepak Puthal

Copyright © 2018 Xiaolong Xu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fog computing is emerging as a powerful and popular computing paradigm for performing IoT (Internet of Things) applications; it extends the cloud computing paradigm and makes it possible to execute IoT applications at the edge of the network. The IoT applications could choose fog or cloud computing nodes for responding to their resource requirements, and load balancing is one of the key factors for achieving resource efficiency and avoiding bottlenecks, overload, and low load. However, it remains a challenge to realize load balance for the computing nodes in the fog environment during the execution of IoT applications. In view of this challenge, a dynamic resource allocation method, named DRAM, for load balancing in the fog environment is proposed in this paper. Technically, a system framework for fog computing and the load-balance analysis for various types of computing nodes are presented first. Then, a corresponding resource allocation method in the fog environment is designed through static resource allocation and dynamic service migration to achieve load balance for fog computing systems. Experimental evaluation and comparison analysis are conducted to validate the efficiency and effectiveness of DRAM.

1. Introduction

In recent years, the Internet of Things (IoT) has attracted attention from both industry and academia, as it is beneficial to humans' daily lives. The data extracted from smart sensors are often transmitted to the cloud data centers, and the applications are generally executed by the processors in the data centers [1]. The cloud computing paradigm is efficient in provisioning computation and storage resources for the IoT applications, but the ever-increasing resource requirements of the IoT applications lead to explosively increased energy consumption and performance degradation of the computing nodes due to data transmission and computing node migration; thus, how to perform IoT applications becomes an urgent issue [2-6]. Fog computing extends the computing process to the edge of the network rather than performing the IoT applications in the cloud platforms.

In the fog environment, the routers are the potential physical servers which could provision resources for the fog services at the edge of the network [7, 8]. The routers could enhance the performance of computation and storage, and thus could be fully utilized as computing nodes. In the big data era, there are different performance requirements for the IoT applications, especially for the real-time applications; thus, such applications choose the edge computing nodes as a priority to host them [9, 10]. In the fog environment, the users could access and utilize the computation, storage, and network resources in the same way that customers use cloud resources, and virtualization technology is also applicable for provisioning on-demand resources dynamically [11]. The IoT applications could be performed by the fog computing nodes and the physical resources deployed in the remote cloud data center. The resource allocation for the IoT applications should take into account both the centralized and the geodistributed computing nodes, and the resource schedulers and managers

should choose the appropriate computing nodes to host the fog services combined in the IoT applications through the design of resource allocation strategies.

Resource allocation and resource scheduling are the key technologies to manage the data centers, which contribute a great deal to lowering the carbon emission, improving the resource utilization, and obtaining load balancing for the data centers [12-14]. In the cloud environment, the main goal of resource allocation is to optimize the number of active physical machines (PMs) and make the workloads of the running PMs distributed in a balanced manner, to avoid bottlenecks and overloaded or low-loaded resource usage [14-17]. In the fog environment, resource allocation becomes more complicated since the applications could be responded to by computing nodes both in fog and in clouds. The computing nodes in the fog are distributed dispersedly at the edge of the network, while the computing nodes in the cloud are located in a centralized data center. The resource requirements of the IoT applications for the computing nodes vary, as the applications have different demands of computing power, storage capacity, and bandwidth. Therefore, it is necessary to undergo resource allocation for the dynamic resource requirements of the IoT applications, to achieve the goal of load balancing.

With the above observations, it is still a challenge to realize load balance for the computing nodes in the fog environment during the execution of IoT applications. In view of this challenge, a dynamic resource allocation method, named DRAM, for load balancing in the fog environment is proposed in this paper. Specifically, our main contributions are threefold. Firstly, we present a system framework for IoT applications in the fog environment and conduct the load-balance analysis for various types of computing nodes. Then, a corresponding resource allocation method in the fog environment is designed through static resource allocation and dynamic service migration to achieve load balance for the fog computing systems. Finally, adequate experimental analysis is conducted to verify the performance of our proposed method.

The rest of this paper is organized as follows. In Section 2, formalized concepts and definitions are presented for load-balance analysis in the fog environment. Section 3 elaborates the proposed resource allocation method DRAM. Section 4 illustrates the comparison analysis and performance evaluation. Section 5 summarizes the related work, and Section 6 concludes the paper and presents the prospects for future work.

2. Preliminary Knowledge

In this section, a fog computing framework for IoT applications is designed and the load-balance analysis is conducted as well. To facilitate dynamic resource allocation for load balancing in the fog environment, formal concepts and load-balance analysis are presented. Key notations and descriptions used in this section are listed in the section named "Key Terms and Descriptions Involved in Resource Scheduling in Fog Environment."

2.1. System Framework for Fog Computing. Fog computing is a new computing paradigm, which sufficiently leverages the decentralized resources through the fog and cloud environments, to provision the computation and storage services for the IoT applications. Fog computing extends data processing and data storage between the smart sensors and the cloud data centers. Some of the tasks from the IoT applications could be processed in the fog rather than performing all the tasks in the cloud environment. Virtualization technology could be employed to improve the resource usage in the fog environment.

Figure 1 shows a hierarchical framework for computing tasks from IoT applications in the fog environment. There are four layers in this framework, that is, the IoT application layer, the service layer, the fog layer, and the cloud layer. The IoT application contains a large amount of service requirements that need to be responded to by selecting appropriate computing nodes according to the time urgency and the resource amount from fog and cloud. The fog layer consists of the edge computing nodes and the intermediate computing nodes. The services with the highest time urgency and lower computation density can be calculated by the edge computing nodes in the fog layer, and the less urgent tasks could choose intermediate computing nodes for execution. The cloud layer is appropriate for hosting the loose-fitting tasks with high-density computation and huge-volume storage, which often demand a large amount of physical resources. The intermediate computing nodes could be the routers for data transmission, the edge computing nodes could be the mobile devices or sensors, and the computing nodes in the cloud are the PMs.

Fog computing is useful for IoT applications which combine many fog services. The fog services cover all the procedures of data extraction, data transmission, data storage, and service execution from the IoT applications.

Definition 1 (fog service). The services requested by the IoT applications are available to be performed in the fog and remote cloud data centers, denoted as $S = \{s_1, s_2, \ldots, s_N\}$, where $N$ is the number of fog services generated by the IoT applications.

The PMs in the remote cloud data centers, the intermediate computing nodes, and the edge computing nodes in the fog environment are all available to be leveraged to provision physical resources for the fog services. Suppose there are $M$ computing nodes in the fog and cloud environment, denoted as $P = \{p_1, p_2, \ldots, p_M\}$. Virtualization technology has been widely applied in the cloud environment, and it is also adaptive in the fog environment to measure the resource capacity of all the computing nodes and the resource requirements of the fog services.

Definition 2 (resource capacity of computing nodes). For all the computing nodes in the fog and cloud, their resource capacities are quantified as the number of resource units, and each resource unit contains various physical resources, including CPU, memory, and bandwidth.
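As a quick illustration of Definitions 1 and 2, the fog services and computing nodes can be modeled as plain records; the service record mirrors the requirement tuple introduced in Definition 3 below. This is a sketch with our own class and field names (FogService, ComputingNode), not code from the paper.

```python
from dataclasses import dataclass

@dataclass
class FogService:
    """A fog service s_n with its resource requirement (cf. Definitions 1 and 3)."""
    sid: int
    start: float      # request start time
    duration: float   # duration time
    node_type: int    # required node type
    units: int        # requested amount of resource units

@dataclass
class ComputingNode:
    """A fog/cloud computing node p_m with a capacity in resource units (Definition 2)."""
    nid: int
    node_type: int    # e.g. 1 = edge, 2 = intermediate, 3 = cloud PM (our convention)
    capacity: int     # c_m, measured in resource units

# A toy instance: two services requesting type-1 nodes, two type-1 nodes.
services = [FogService(1, 0.0, 1.0, 1, 2), FogService(2, 0.0, 0.8, 1, 1)]
nodes = [ComputingNode(1, 1, 4), ComputingNode(2, 1, 6)]
```

A resource unit here is an abstract bundle (CPU, memory, bandwidth), so capacities and requests are simple integer counts, matching the paper's unit-based accounting.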

[Figure: a four-layer hierarchy showing the IoT application layer, the service layer, the fog layer (edge and intermediate computing nodes with resource units), and the cloud layer (VM instances on physical resources).]

Figure 1: Fog computing framework for IoT applications.

Definition 3 (resource requirement of $s_n$). The corresponding resource requirement of the $n$th fog service $s_n$ could be quantified as the time requirements, the resource type, and the resource amount to perform $s_n$, denoted as $r_n = \{stim_n, dutim_n, type_n, cou_n\}$, where $stim_n$, $dutim_n$, $type_n$, and $cou_n$ represent the request start time, the duration time, the resource type, and the requested amount of $s_n$, respectively.

Note that the resource requirements in this paper are measured by the amount of resource units. For example, in the cloud data centers, the resource units are the VM instances, and the customers often rent several VM instances to host one application.

2.2. Load Balancing Model in Fog Environment. The fog computing paradigm relieves the load concentration of the cloud environment. Due to the diversity of execution durations and specifications of the computing nodes in the fog, the resources could not always be fully utilized. In the fog environment, we try to achieve load balancing for the computing nodes to avoid low utilization or overload of the computing nodes.

Let $I_n^m(t)$ be the binary variable judging whether $s_n$ ($1 \le n \le N$) is assigned to the computing node $p_m$ ($1 \le m \le M$) at time instant $t$:

  $I_n^m(t) = \begin{cases} 1, & \text{if } s_n \text{ is assigned to } p_m, \\ 0, & \text{otherwise.} \end{cases}$  (1)

With the judgement of the fog service distribution, the resource utilization of the $m$th computing node $p_m$ at time $t$ is calculated by

  $ru_m(t) = \frac{1}{c_m} \sum_{n=1}^{N} I_n^m(t) \cdot r_n$,  (2)

where $c_m$ is the capacity of the computing node $p_m$.

According to the service distribution, the number of services deployed on $p_m$ at time instant $t$ is calculated by

  $ns_m(t) = \sum_{n=1}^{N} I_n^m(t)$.  (3)

The load-balance variance should be specified for each type of computing node. Suppose there are $W$ types of computing nodes in the cloud and fog environment for performing the fog services.

The load-balance variance is closely relevant to the resource utilization. To calculate the resource utilization, the number of employed computing nodes of type $w$ at time instant $t$ is calculated by

  $a_w(t) = \sum_{m=1}^{M} f_m(t) \cdot \beta_w^m$,  (4)

where $\beta_w^m$ is a flag judging whether $p_m$ is a type-$w$ computing node, which is described in (5), and $f_m(t)$ judges whether $p_m$ is empty at $t$, presented in (6).
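Equations (1) to (3) translate directly into code. The sketch below assumes the assignment $I_n^m(t)$ is represented as a service-to-node mapping at one fixed instant $t$; all function and variable names are ours, not the paper's.

```python
def utilization(assign, demands, capacity, m):
    """ru_m(t) of Eq. (2): summed demand of the services on node m, over c_m.

    assign:   dict service_id -> node_id (the indicator I_n^m(t) of Eq. (1))
    demands:  dict service_id -> requested resource units r_n
    capacity: dict node_id -> c_m
    """
    used = sum(demands[n] for n, node in assign.items() if node == m)
    return used / capacity[m]

def services_on(assign, m):
    """ns_m(t) of Eq. (3): number of services deployed on node m."""
    return sum(1 for node in assign.values() if node == m)

assign = {1: "p1", 2: "p1", 3: "p2"}
demands = {1: 2, 2: 1, 3: 1}
capacity = {"p1": 6, "p2": 4}
print(utilization(assign, demands, capacity, "p1"))  # 0.5
print(services_on(assign, "p1"))                     # 2
```

Keeping the assignment as an explicit mapping makes the later migration step a matter of rewriting a few entries and recomputing these two quantities.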

  $\beta_w^m = \begin{cases} 1, & \text{if } p_m \text{ is a } w\text{th-type computing node}, \\ 0, & \text{otherwise.} \end{cases}$  (5)

  $f_m(t) = \begin{cases} 1, & \text{if } ns_m(t) > 0, \\ 0, & \text{otherwise.} \end{cases}$  (6)

The resource utilization of the computing nodes of the $w$th type at time $t$ is calculated by

  $RU_w(t) = \frac{1}{a_w(t)} \sum_{m=1}^{M} \sum_{n=1}^{N} I_n^m(t) \cdot ru_m(t) \cdot f_m(t)$.  (7)

Definition 4 (variance of load balance of $p_m$ at time instant $t$). The load-balance value is measured by the variance of the resource utilization. The variance value for $p_m$ is calculated by

  $lb_m(t) = \left( ru_m(t) - \sum_{w=1}^{W} RU_w(t) \cdot \beta_w^m \right)^2$.  (8)

Then, the average variance value for all the type-$w$ computing nodes is calculated by

  $LB_w(t) = \frac{1}{a_w(t)} \sum_{m=1}^{M} \sum_{n=1}^{N} I_n^m(t) \cdot lb_m(t) \cdot f_m(t)$.  (9)

For the execution period $[T_0, T]$ in the fog and cloud environment, the load-balance variance could be calculated by

  $LB_w = \frac{1}{T - T_0} \int_{T_0}^{T} LB_w(t) \, dt$.  (10)

With these observations, the problem of minimizing the variance of load balance can be formulated as follows:

  $\min\ LB_w, \quad \forall w = 1, \ldots, W$  (11)

  $\text{s.t.}\quad a_w(t) \le \sum_{m=1}^{M} \beta_w^m$,  (12)

  $\sum_{n=1}^{N} r_n \le \sum_{m=1}^{M} c_m$,  (13)

  $\sum_{n=1}^{N} I_n^m r_n \le c_m$,  (14)

where $\sum_{n=1}^{N} r_n$ in formula (13) represents the resource requirements of all services, $\sum_{m=1}^{M} c_m$ in formula (13) represents the capacity of all computing nodes, and $\sum_{n=1}^{N} I_n^m r_n$ in formula (14) represents the resource requirements of the services allocated to the $m$th computing node.

From (11) to (14), we can find that the final objective solved in this paper is an optimization problem with multiple constraints [18-20].

3. A Dynamic Resource Allocation Method for Load Balancing in Fog Environment

In this section, we propose a dynamic resource allocation method, named DRAM, for load balancing in the fog environment. Our method aims to achieve high load balance for all the types of computing nodes in the fog and the cloud platforms.

3.1. Method Overview. Our method consists of four main steps, that is, fog service partition, spare space detection for computing nodes, static resource allocation for fog service subsets, and load-balance driven global resource allocation, as shown in the section named "Specification of Our Proposed Resource Allocation Method for Load Balancing in Fog Environment." In this method, Step 1 is the preprocessing procedure, Step 2 is employed to detect the resource usage for Steps 3 and 4, and Step 3 is designed for static resource allocation for the fog services in the same subset and provides the primary resource provision strategies for Step 4. Finally, Step 4 is a global resource allocation method to realize dynamic load balance.

Specification of Our Proposed Resource Allocation Method for Load Balancing in Fog Environment

Step 1 (fog service partition). There are different types of computing nodes for the performance of fog services. To efficiently provision resources, the fog services are classified into several sets based on the required node type. Furthermore, these sets are divided into multiple subsets according to the request start time.

Step 2 (spare space detection for computing nodes). To judge whether a computing node is suitable to host a fog service, it is necessary to detect the spare space of all the computing nodes. We analyze the employed resource units through the occupation records, and then the spare space of the computing nodes could be obtained.

Step 3 (static resource allocation for fog service subset). For the fog services in the same service subset, the proper computing nodes are identified to host these services. When allocating resource units for a fog service, the computing node with the least yet sufficient spare space is selected. Besides, some workloads from the computing nodes with higher resource usage are migrated to the computing nodes with low resource usage.

Step 4 (load-balance driven global resource allocation). For all the fog service subsets, the initialized resource allocation strategies are obtained from Step 3, and then the dynamic resource allocation adjustment is conducted at the completion moments of the fog services to achieve global load balance during the execution period of the fog services.

3.2. Fog Service Partition. The fog services from different IoT applications have different requirements of computing resources; that is, the fog services need to choose different
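Equations (7) to (9) first average the utilization over the employed (non-empty) nodes of one type and then take each node's squared deviation from that average. The sketch below uses our own naming, with nodes given as (type, utilization) pairs and the filter $f_m(t)$ realized by skipping empty nodes.

```python
def type_mean_utilization(nodes, w):
    """RU_w(t) of Eq. (7): mean utilization over non-empty nodes of type w."""
    us = [ru for (ntype, ru) in nodes if ntype == w and ru > 0]
    return sum(us) / len(us) if us else 0.0

def load_balance_variance(nodes, w):
    """LB_w(t) of Eq. (9): average squared deviation lb_m(t) (Eq. (8)) for type w."""
    mean = type_mean_utilization(nodes, w)
    devs = [(ru - mean) ** 2 for (ntype, ru) in nodes if ntype == w and ru > 0]
    return sum(devs) / len(devs) if devs else 0.0

# (type, ru_m(t)) pairs; the type-1 node with utilization 0.0 is empty and excluded.
nodes = [(1, 0.25), (1, 0.75), (1, 0.0), (2, 0.6)]
print(type_mean_utilization(nodes, 1))  # 0.5
print(load_balance_variance(nodes, 1))  # 0.0625
```

The objective (11) then amounts to keeping this per-type variance small over time; the time-averaged value $LB_w$ of Eq. (10) would be approximated by sampling this function at the scheduling instants.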

types of computing nodes for resource response. Suppose there are $W$ types of processing nodes, including the PMs in cloud platforms, the intermediate nodes, and the edge nodes near the sensors.

In this paper, we try to achieve the goal of load balancing for each type of computing node; thus, we assume that the fog services need to be performed with the same type of computing nodes. As a result, the fog services could be partitioned into $W$ different service sets, denoted as $F = \{f_1, f_2, \ldots, f_W\}$.

The fog services in the same set have different resource response times. To efficiently realize resource allocation for the services, the fog services in the same set should be partitioned into several subsets according to the start time for occupying the resource units of the computing nodes. Then, we can allocate resource units for the fog services in the same set to achieve high load balancing.

The set $f_w$ ($w = 1, 2, \ldots, W$) in $F$ is divided into multiple subsets according to the requested start time of the fog services. Let $f_{w,i}$ be the $i$th ($1 \le i \le |f_w|$) subset contained in $f_w$. After the partition process, the fog services in $f_{w,i}$ have the same request start time.

For example, there are 6 fog services, that is, $s_1$~$s_6$, and the resource requirements of these 6 services are $s_1$: (0, 1.0, 1, 2), $s_2$: (0, 0.8, 1, 1), $s_3$: (1, 0.5, 1, 1), $s_4$: (1, 0.7, 1, 3), $s_5$: (0, 1.5, 2, 2), and $s_6$: (1, 1.0, 2, 3), as shown in Figure 2. These 6 fog services are put in two different sets $f_1$ and $f_2$ according to the requested type of computing nodes, $f_1 = \{s_1, s_2, s_3, s_4\}$ and $f_2 = \{s_5, s_6\}$. Then, $f_1$ and $f_2$ are partitioned into subsets according to the requested start time. Set $f_1$ is divided into 2 subsets, $f_{1,1} = \{s_1, s_2\}$ and $f_{1,2} = \{s_3, s_4\}$; meanwhile, $f_2$ is also separated into 2 subsets, that is, $f_{2,1} = \{s_5\}$ and $f_{2,2} = \{s_6\}$.

Figure 2: An example of subset acquisition with 6 fog services (i.e., $s_1$~$s_6$).

Algorithm 1 specifies the process of fog service subset acquisition. In Algorithm 1, the input is the resource requirements of the IoT applications, that is, $R$, and the output is the partitioned fog service set $F$. We traverse all the fog services, and the services are put in different sets according to the requested resource type (Lines (1) to (6)); then the fog services are put in $W$ different service sets. Then, each set $f_w$ is divided into multiple subsets according to the requested start time (Lines (8) to (17)).

Algorithm 1: Fog service subset acquisition.
Input: The resource requirements of IoT applications $R$
Output: The partitioned fog service set $F$
(1) for $i$ = 1 to $N$ do
(2)   for $j$ = 1 to $W$ do
      // There are $W$ types of computing nodes in fog and cloud
(3)     if $type_i$ == $j$ then
(4)       Add $r_i$ to $f_j$
(5)     end if
(6)   end for
(7) end for
(8) for $i$ = 1 to $W$ do
(9)   $nn$ = 0, $q$ = 0, $f$ = $stim_0$
(10)  while $nn$ < $|f_i|$ do
(11)    if $stim_q \le f$ then
(12)      Add the $q$th fog service to $f_{i,nn}$
(13)    else $nn$ = $nn$ + 1, $q$ = $q$ + 1, $f$ = $stim_q$
(14)      Add the $q$th fog service to $f_{i,nn}$
(15)    end if
(16)  end while
(17) end for
(18) Return $F$

3.3. Spare Space Detection for Computing Nodes. The fog services need to be put on the computing nodes; thus, those computing nodes with spare space should be identified. For all the computing nodes, the occupation records are employed to monitor the resource usage of all the computing nodes in the fog and cloud.

Definition 5 (occupation record $rs_{m,i}$). The $i$th ($1 \le i \le |rs_m|$) occupation record in $rs_m$ contains the responded fog service, the occupation start time, the duration time, and the resource units, which is a 4-tuple, denoted as $rs_{m,i} = (fs_{m,i}, st_{m,i}, dt_{m,i}, us_{m,i})$, where $fs_{m,i}$, $st_{m,i}$, $dt_{m,i}$, and $us_{m,i}$ are the fog service, the start time, the duration time, and the resource unit set of $rs_{m,i}$, respectively.

A computing node has a set of occupation records, since it could host several fog services. The record set for the computing node $p_m$ ($1 \le m \le M$) is recorded as $rs_m$. Then, for all the computing nodes in the fog and cloud, there are $M$ occupation record sets, denoted as $RS = \{rs_1, rs_2, \ldots, rs_M\}$. In $rs_m$, there are many occupation records, which reflect the resource usage of the computing node $p_m$.

The occupation records are dynamically updated according to the real-time resource provisioning for the fog services. Once the fog services are moved to other computing nodes during their lifetime, the occupation time parameters should be updated accordingly for the relevant records. The occupation records are generated from the users when they apply for resources in the fog and cloud data center for implementing the services generated from the IoT applications. Benefiting from the resource monitoring, the occupied resource units of all the computing nodes and the spare units could be detected for resource allocation at any time.

Definition 6 (spare space of $p_m$). The spare space of the computing node $p_m$ is defined as the idle amount of resource units, which is measured by the difference between the capacity and the occupied amount of resource units on $p_m$.
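Algorithm 1 amounts to a double grouping: by requested node type into the sets $f_w$, then by request start time into the subsets $f_{w,i}$. The sketch below assumes services are given as (id, start_time, node_type) tuples and reproduces the Figure 2 example; the function name is ours.

```python
from collections import defaultdict

def partition_services(services):
    """Algorithm 1 (sketch): group fog services by node type, then by start time.

    services: iterable of (service_id, start_time, node_type).
    Returns {node_type: {start_time: [service_id, ...]}}.
    """
    sets = defaultdict(lambda: defaultdict(list))
    for sid, start, ntype in services:
        # first grouping gives the sets f_w, second the subsets f_{w,i}
        sets[ntype][start].append(sid)
    return sets

# The 6 services of Figure 2, as (id, start_time, node_type).
svc = [(1, 0, 1), (2, 0, 1), (3, 1, 1), (4, 1, 1), (5, 0, 2), (6, 1, 2)]
F = partition_services(svc)
print(F[1][0])  # [1, 2]  -> subset f_{1,1}
print(F[2][1])  # [6]     -> subset f_{2,2}
```

Grouping by exact start time mirrors the paper's assumption that services in one subset share a request start time; a real scheduler might bucket nearby start times instead.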

The spare space of the computing node $p_m$ could be detected from the analysis of the occupation records. In these records, if the occupation start time is earlier than the statistic time instant for checking the node status and the occupation finish time is later than the statistic time, the relevant resource units combined in the occupation records could be obtained. With these acquired resource units and the resource capacity of $p_m$, the spare space could be finally detected.

For example, there are 6 occupation records for the computing node $p_1$, that is, $r_1$: (1, 0, 0.5, $\{u_{1,1}\}$), $r_2$: (2, 0.3, 0.7, $\{u_{1,2}\}$), $r_3$: (3, 1.3, 1, $\{u_{1,1}, u_{1,2}\}$), $r_4$: (4, 1.4, 1.2, $\{u_{1,3}\}$), $r_5$: (5, 1.4, 0.8, $\{u_{1,4}, u_{1,5}\}$), and $r_6$: (6, 2.5, 0.5, $\{u_{1,1}\}$), as shown in Figure 3. If the statistic instant for spare space detection is 1.5, the identified occupation records within the statistic time are $r_3$, $r_4$, and $r_5$. Based on the analysis of the occupation records, the employed resource units at this moment are $u_{1,1}$, $u_{1,2}$, $u_{1,3}$, $u_{1,4}$, and $u_{1,5}$. Suppose the capacity of $p_1$ is 6; then, the spare space of $p_1$ at time instant 1.5 is 1.

Figure 3: An example of spare space detection with 6 occupation records (i.e., $r_1$~$r_6$).

Algorithm 2 specifies the key idea of spare space detection for computing nodes. In Algorithm 2, the input and the output are the occupation records of the computing node $p_m$ and the spare space for $p_m$. According to Definition 6, we need to calculate the occupied amount of resource units on $p_m$ first (Lines (1) to (7)), and then the spare space could be detected (Line (8)).

Algorithm 2: Spare space detection for computing nodes.
Input: The occupation records for the computing node $p_m$
Output: The spare space for $p_m$
(1) $cou$ = 0
(2) for $i$ = 1 to $|rs_m|$ do
(3)   $ct_{m,i}$ = $st_{m,i}$ + $dt_{m,i}$
      // $ct_{m,i}$ is the finish time for the occupation of resource units
(4)   if $t \ge st_{m,i}$ && $t < ct_{m,i}$ then
      // $t$ is the request time for statistics
(5)     $cou$ = $cou$ + $|us_{m,i}|$
(6)   end if
(7) end for
(8) $cou$ = $c_m$ - $cou$
(9) Return $cou$

3.4. Static Resource Allocation for Fog Service Subset. Based on the fog service partition in Section 3.2 and the spare space detection for computing nodes in Section 3.3, the fog services that need to be processed and the available computing resources for fog service execution are identified, which are all beneficial for resource allocation.

In the fog environment, the fog services need to be responded to by the computing nodes, and the time requirements also should be presented when allocating the resources to the fog services. In this section, we define the resource allocation records to preserve the allocation history about resource provisioning for the fog services.

Definition 7 (resource allocation record for $s_n$). The resource allocation record for $s_n$ consists of the node type, the number of resource units, the start time, and the desired duration time, which is denoted as $a_n = (nt_n, num_n, st_n, dt_n)$, where $nt_n$, $num_n$, $st_n$, and $dt_n$ are the node type, the amount of resource units, the service start time, and the resource occupation time for $s_n$, respectively.

The fog services in the same fog service subset have the same required node type and start time. When allocating resource units for the fog service subset, each fog service should find a suitable computing node to host it. Thus, we need to find the available nodes first. The computing nodes of the requested type which have spare space, detected by Algorithm 2, are chosen as the candidate resources to be provided for the fog services in the subset.

To achieve the load balancing of the computing nodes, we try to achieve high resource usage of the employed computing nodes. The problem of static resource allocation is similar to the bin packing problem, which is NP-hard. Here, we leverage the idea of Best Fit Decreasing to realize the process of computing node matching for the fog services. Before resource allocation, the fog services are sorted in the decreasing order of the requested resource amount. The service with more required resource units will be processed first, and it chooses the computing node with the least yet sufficient spare space for hosting the service.

For example, there are 3 computing nodes $P_1$, $P_2$, and $P_3$, and the spare spaces of these 3 computing nodes are 4, 6, and 9, respectively, as shown in Figure 4. There are two fog services in the same subset, that is, $S_1$ and $S_2$, and the requested resource amounts of these two services are 2 and 6. When conducting resource allocation for $S_1$ and $S_2$, $S_2$ has more resource requirements, and thus $S_2$ is processed first. After resource allocation, $S_2$ chose $P_3$ for hosting and $S_1$ chose $P_1$ for hosting.

The above allocation may lead to an unbalanced distribution of the workloads of some computing nodes. In this section, a threshold $\rho$ is employed to judge whether a computing node is in low resource utilization. If a computing node is in low resource utilization and there are no other computing nodes that could host the workloads in this computing node, we choose to move some workloads to this computing node to improve the load balance.
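Algorithm 2's record scan and the Best Fit Decreasing placement of Section 3.4 can be sketched together: the spare space at time $t$ is the capacity minus the units of the records active at $t$, and each service, largest demand first, goes to the candidate node with the least yet sufficient spare space. Function names and the simplified record layout are our own, not the paper's.

```python
def spare_space(capacity, records, t):
    """Algorithm 2 (sketch): spare resource units of a node at time t.

    records: list of (start, duration, units) occupation records.
    """
    occupied = sum(u for (st, dt, u) in records if st <= t < st + dt)
    return capacity - occupied

def best_fit_decreasing(services, spare):
    """Section 3.4 (sketch): place each (id, demand) service, largest first,
    on the candidate node with the least yet sufficient spare space."""
    placement = {}
    for sid, demand in sorted(services, key=lambda s: -s[1]):
        fits = [(sp, nid) for nid, sp in spare.items() if sp >= demand]
        if fits:
            sp, nid = min(fits)       # least sufficient spare space
            placement[sid] = nid
            spare[nid] -= demand      # the chosen node loses that many units
    return placement

# Figure 3's node p1: capacity 6; records give the unit counts of r1..r6.
recs = [(0, 0.5, 1), (0.3, 0.7, 1), (1.3, 1, 2), (1.4, 1.2, 1), (1.4, 0.8, 2), (2.5, 0.5, 1)]
print(spare_space(6, recs, 1.5))  # 1

placement = best_fit_decreasing([("S1", 2), ("S2", 6)], {"P1": 4, "P2": 6, "P3": 9})
print(placement)
```

Under a strict least-sufficient rule this sketch places S2 on the exactly fitting P2; the paper's Figure 4 walk-through assigns S2 to P3, so its selection may weigh additional factors such as the subsequent migration step.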

Figure 4: An example of static resource allocation for fog services $S_1$ and $S_2$, with computing nodes $P_1$, $P_2$, and $P_3$.

After resource allocation, several allocation records are generated to record the allocation history for the fog services in the service subset. Meanwhile, the computing nodes providing resource units for executing the fog services also generate some occupation records.

Algorithm 3 illustrates the key idea of static resource allocation for the fog service subset $f_{w,i}$. The input of Algorithm 3 is the fog service subset $f_{w,i}$, and the output of this algorithm is the resource allocation records for the fog services in $f_{w,i}$. The required computing nodes of the fog services in the same subset have the same node type, which should be obtained first (Line (1)). The services with more requested resource units will be processed first, so $f_{w,i}$ should be sorted in the decreasing order of the required resource amount (Line (2)). Then, each service could be responded to with computing nodes sequentially (Line (3)). When selecting a computing node to provision resources for a fog service, the available computing nodes with enough spare space to host the service, calculated by Algorithm 2, should be obtained first (Lines (4) to (11)). The computing nodes also should be sorted in the increasing order of spare space (Line (12)). Then, the computing node with the least spare space will be selected to accommodate the service (Line (13)). Some workloads from the computing nodes with higher resource usage will be migrated to the computing nodes with low resource utilization to improve the load balance (Lines (15) to (24)). Finally, the relevant occupation records and the resource allocation records for the fog services should be updated (Lines (25) and (26)).

Algorithm 3: Static resource allocation for fog service subset.
Input: The fog service subset $f_{w,i}$
Output: The relevant resource allocation records
(1) Get the node type $nt$ in the service subset $f_{w,i}$
(2) Sort $f_{w,i}$ in decreasing order of required resource amount
(3) for each fog service in $f_{w,i}$ do
(4)   for $m$ = 1 to $M$ do
(5)     if $p_m$ has the same type as $nt$ then
(6)       Get the spare space by Algorithm 2
(7)       if $p_m$ has enough space to host the service then
(8)         Add the computing node to $CL$
(9)       end if
(10)    end if
(11)  end for
(12)  Sort $CL$ in increasing order of spare space
(13)  Put the service in the first computing node in $CL$
(14) end for
(15) flag = 1, $i$ = 1
(16) Identify the occupied computing nodes from $CL$ to $CL'$
(17) Sort $CL'$ in the decreasing order of spare space
(18) while flag == 1 do
(19)   if the resource usage of $cl'_i$ is less than $\rho$ then
(20)     Select the tasks to migrate to the computing node
(21)     $i$ = $i$ + 1
(22)   else flag = 0
(23)   end if
(24) end while
(25) Update the relevant occupation records
(26) Generate the allocation records

3.5. Load-Balance Driven Global Resource Allocation. From the analysis in Sections 3.2 and 3.4, the initialized resource allocation is conducted, and the different types of computing nodes could achieve high resource utilization and load balancing at the allocation moment. However, the static resource allocation could only achieve temporary load balancing at the service arrival moments. During the service running, the resource usage of the computing nodes changes dynamically due to the various lifetimes of the fog services. In this section, a global resource allocation strategy is designed to maintain load balancing during the execution of the fog services.

The fog service subsets demand the same type of computing nodes for hosting sequentially according to the requested start time of the resources. Let $bt_{w,i}$ be the requested start time for resource occupation of the $i$th subset $f_{w,i}$ in $f_w$. For dynamic adjustment of resource allocation at the finish time of the fog services in $f_{w,i}$ ($1 \le i < |f_w|$), the scheduling time should be within the time period ($bt_{w,i}$, $bt_{w,i+1}$). When $i = |f_w|$, the scheduling time should be the completion time of the remaining fog service loads during the period of the execution of the fog services in $f_{w,i}$.

At the scheduling time, the running workloads occupy some computing nodes, and these computing nodes may be in low resource usage due to the service completion. The computing nodes of the same type in fog or cloud could be sorted in the decreasing order of resource usage. The workloads in the computing nodes with lower resource usage could be migrated to the computing nodes with higher resource usage, to achieve higher resource utilization. Besides, the migration of workloads could also help to realize the goal of load balancing, as the computing nodes with more spare space could be made vacant and further shut down.

For the workloads from different fog services, it is necessary to find the destination computing nodes to host them. The selection of destination computing nodes depends on the resource requirements of the workloads and the spare space of the computing nodes. If all the workloads from the
8 Wireless Communications and Mobile Computing

Table 1: Parameter settings.


Input: The fog service set 𝑆
Output: The resource allocation records Parameter Domain
The occupation records on computing nodes Number of fog services {500, 1000, 1500, 2000}
(1) Obtain fog service subset F by Algorithm 1 Number of computing nodes 3000
(2) for 𝑖 = 1 to 𝑊 do Number of node types 3
(3) for 𝑗 = 1 to |𝑓𝑤 | do
(4) Algorithm 3 Static resource allocation for fw,j Resource capacity {7, 12, 18}
(5) Calculate 𝐶𝑇 Resource requirements of fog services [1, 15]
// CT is the competition time list Duration for each service [0.1, 4.8]
// CT = {𝑐𝑡1 , 𝑐𝑡2 , . . . , 𝑐𝑡𝐾 }
(6) for 𝑘 = 1 to 𝐾 do
(7) Update the current run list in fw,j records for the running services (Line (6)). The fog services
(8) for 𝑙 = 1 to 𝑀 do on the computing node with less spare space would be moved
(9) Get spare space by Algorithm 2 at 𝑐𝑡𝑘
to the computing node with higher resource usage, which has
(10) if𝑝𝑙 has spare space and is not empty then
(11) Add 𝑝𝑙 to 𝑆𝐿 enough spare space to host the services (Lines (8) to (20)).
(12) end if When all the fog services on a computing node could find
(13) end for the destination node, the relevant allocation records and the
(14) Sort SL in increasing order of spare space occupation records for the resource units will be generated
(15) flag = 1, 𝑞 = 1 and updated (Lines (15) to (27)).
(16) while flag == 1 do
(17) Get the occupied resources sets on 𝑠𝑙𝑞 4. Experimental Analysis
(18) for each occupied resource set do
(19) Confirm the destination PM In this section, the cloud simulator Cloudsim is applied to
(20) end for evaluate our proposed method DRAM. The intermediate
(21) if the resource sets can be moved then computing nodes and the edge computing nodes are simu-
(22) 𝑞=𝑞+1 lated as two computing data centers. The resource allocation
(23) Update the relevant allocation records
method for fog environment is NP-hard, like the bin packing
(24) Update the occupation records
(25) else flag = 0
problem; thus, the typical and efficient resource allocation
(26) end if methods FF, BF, FFD, and BFD are employed for comparison
(27) end while analysis.
(28) end for
(29) end for 4.1. Experimental Context. To discuss the effectiveness of
(30) end for DRAM, 4 datasets with different scale of fog services are
utilized, which are shared at https://drive.google.com/drive/
folders/0B0T819XffFKrZTV4MFdzSjg0dDA?usp=sharing.
Algorithm 4: Load-balance driven global resource allocation.
The parameters for experimental evaluation are presented in
Table 1.
The fog services employ three types of computing nodes,
same computing node could find the destination computing that is, the edge computing node, the intermediate node,
nodes, these workloads could be migrated to the destination and the PMs in cloud for resource response. The number of
computing nodes. Finally, the resource allocation records and services for each type of computing node contained in the
the occupation records are generated or updated according to 4 different datasets is shown in Figure 5. For example, when
the real occupation computing nodes and the usage time of the number of fog services is 1000, there are 324 fog services
the corresponding resource units. that need edge computing nodes, 353 fog services that need
Algorithm 4 illustrates the key process of load-balance intermediate computing nodes, and 323 fog services that need
driven global resource allocation. The key idea of Algorithm 4 PMs in the remote cloud for resource response.
is to conduct static resource allocation for the fog services at
the start execution time and dynamically adjust the service 4.2. Performance Evaluation. Our proposed method tends to
placement according to the resource usage of all the employed minimize the load-balance variance, which is relevant to the
computing nodes. The input of this algorithm is the fog resource utilization of each computing node and the average
service set 𝑆, and the final output of this algorithm is the resource utilization. Therefore, we conduct performance
resource allocation records and the occupation records. In evaluation for this fog computing system on the employed
this algorithm, the fog service subset is achieved first by number of computing nodes, resource utilization, and load-
Algorithm 1 (Line (1)), and then we traverse each subset for balance variance.
resource allocation (Line (2)) and conduct static resource
allocation for the subsets by Algorithm 3 (Line (3)). Then, for (1) Performance Evaluation on the Employed Number of
each subset, the competition time list 𝐶𝐿 is extracted for load- Computing Nodes. The number of the computing nodes could
balance driven dynamic resource allocation (Line (5)). Then, reflect the efficiency of resource usage. Figure 6 shows the
at each competition instant, we adjust the resource allocation comparison of the number of employed computing nodes
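The load-balance variance is defined formally earlier in the paper; as a rough sketch consistent with the key-terms notation (ru_m(t) per node, RU_w(t) as the type-wide mean, LB_w(t) as the variance), assuming it is the population variance of per-node utilization and using a hypothetical function name, the metric can be computed as:

```python
def load_balance_variance(utilizations):
    """Variance of per-node utilization around the type-wide mean.

    `utilizations` holds ru_m(t) for every node of one type; the mean
    plays the role of RU_w(t) and the result corresponds to LB_w(t)
    (assumed form of the paper's definition).
    """
    mean = sum(utilizations) / len(utilizations)
    return sum((ru - mean) ** 2 for ru in utilizations) / len(utilizations)

# A well-balanced node set scores lower than a skewed one, e.g.
# load_balance_variance([0.70, 0.72, 0.68]) is far smaller than
# load_balance_variance([0.95, 0.40, 0.75]).
```

Minimizing this value pushes every node of a type toward the mean utilization, which is the objective the comparisons below evaluate.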
(1) Performance Evaluation on the Employed Number of Computing Nodes. The number of employed computing nodes reflects the efficiency of resource usage. Figure 6 shows the comparison of the number of employed computing nodes by FF, BF, FFD, BFD, and DRAM using the 4 different scales of datasets. From Figure 6, we can find that our proposed method DRAM, as well as BFD, employs fewer computing nodes than FF, BF, and FFD when the number of fog services is 500, 1000, and 2000. When the number of fog services is 1500, our proposed DRAM achieves even higher efficiency on the employed number of computing nodes than BFD. In the process of static resource allocation, DRAM leverages the basic idea of BFD; thus, in most cases, DRAM and BFD have similar performance on the amount of employed computing nodes. But some computing nodes are spared through the dynamic adjustment of DRAM; thus, in some cases, DRAM is superior to BFD.

Figure 5: Number of fog services for the 3 types of computing nodes with different datasets.

Figure 6: Comparison of the number of employed computing nodes by FF, BF, FFD, BFD, and DRAM with different datasets.

As there are 3 types of computing nodes in our experimental evaluations, we should also evaluate DRAM for the different types of computing nodes, compared with the other four methods. The 4 subfigures in Figure 7 show the comparison of the number of employed computing nodes of different types by FF, BF, FFD, BFD, and DRAM using different datasets. It is intuitive from Figure 7 that our proposed method DRAM fits all kinds of computing nodes: it employs fewer computing nodes than FF, BF, and FFD, and gets similar performance to BFD in most cases.

(2) Performance Evaluation on Resource Utilization. The resource utilization is a key factor that decides the load-balance variance; thus we evaluate this value to discuss the resource usage achieved by FF, BF, FFD, BFD, and DRAM with different datasets. The resource utilization refers to the usage of the resource units on the computing nodes. Figure 8 shows the comparison of average resource utilization by FF, BF, FFD, BFD, and DRAM with different datasets. It is intuitive from Figure 8 that DRAM obtains better resource utilization than FF, BF, FFD, and BFD, since DRAM is a dynamic and adaptive method which adjusts the load distribution during the fog service execution.

Similar to the evaluation on the employed amount of computing nodes, the performance evaluation on resource utilization is conducted from the perspective of the different types of computing nodes. Figure 9 shows the comparison of resource utilization for different types of computing nodes by FF, BF, FFD, BFD, and DRAM with different datasets. From Figure 9, we can find that DRAM achieves higher resource utilization than FF, BF, FFD, and BFD. For example, in Figure 9(c), when the number of fog services is 1500, DRAM obtains resource utilization over 80%, whereas FF, BF, FFD, and BFD obtain near or below 70% resource utilization for each type of computing node.

(3) Performance Evaluation on Load-Balance Variance. The evaluation of the load-balance variance is also conducted by FF, BF, FFD, BFD, and DRAM using the 4 different scales of datasets. Figure 10 shows the comparison of average load-balance variance, where we can find that our proposed method DRAM is superior to the other methods, that is, FF, BF, FFD, and BFD. For example, when the number of fog services is 500, the load-balance variance obtained by DRAM is near 2.5 × 10^-2, whereas FF, BF, FFD, and BFD obtain load-balance variance values over 3 × 10^-2.

The evaluation on the load-balance variance should also take the computing node type into consideration. Figure 11 shows the comparison of load-balance variance values for different types of computing nodes by FF, BF, FFD, BFD, and DRAM with different datasets. From Figure 11, we can find that, when the scale of the datasets changes, our method keeps its advantage on the load-balance variance for each type of computing node.

5. Related Work

The IoT technology has been widely applied in many fields, including weather forecasting and traffic monitoring.
Figure 7: Comparison of the number of the employed computing nodes with different types by FF, BF, FFD, BFD, and DRAM using different datasets ((a)–(d): number of fog services = 500, 1000, 1500, and 2000).
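FF, BF, FFD, and BFD, the baselines compared in these figures, are the classic bin-packing rules the paper names. A minimal, self-contained sketch of two of them (hypothetical helper names, not the authors' code) could look like:

```python
def first_fit(demands, capacity):
    """FF: place each demand on the first opened node with room."""
    spares = []  # remaining capacity of each opened node
    for d in demands:
        for i, spare in enumerate(spares):
            if spare >= d:
                spares[i] -= d
                break
        else:
            spares.append(capacity - d)  # open a new node
    return len(spares)

def best_fit_decreasing(demands, capacity):
    """BFD: sort demands in decreasing order, then place each one on
    the feasible node with the least spare space (tightest fit)."""
    spares = []
    for d in sorted(demands, reverse=True):
        feasible = [i for i, s in enumerate(spares) if s >= d]
        if feasible:
            i = min(feasible, key=lambda i: spares[i])
            spares[i] -= d
        else:
            spares.append(capacity - d)
    return len(spares)

# With capacity 10, FF needs 4 nodes for [4, 4, 4, 6, 6, 6],
# while BFD packs the same demands into 3.
```

DRAM's static phase adopts the same best-fit idea (least spare space first), which is why its node counts track BFD so closely in Figures 6 and 7.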
Figure 8: Comparison of average resource utilization by FF, BF, FFD, BFD, and DRAM with different datasets.
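The average resource utilization plotted in Figure 8 can be read as the mean usage ratio over the non-empty nodes of a type. A small sketch of that bookkeeping (an assumed reading of ru_m(t) and RU_w(t), with empty nodes excluded since they can be shut down):

```python
def avg_utilization(nodes):
    """Mean utilization over the non-empty nodes of one type.

    `nodes` holds (used, capacity) pairs; nodes carrying no load are
    skipped, mirroring the idea that vacated nodes are shut down
    (assumed reading of the paper's metric).
    """
    busy = [(used, cap) for used, cap in nodes if used > 0]
    if not busy:
        return 0.0
    return sum(used / cap for used, cap in busy) / len(busy)

# Two busy nodes at 100% and 50% plus one idle node give 0.75,
# using the paper's capacities 7, 12, and 18.
```

Consolidating workloads onto fewer nodes raises this average, which is what the dynamic adjustment in DRAM exploits.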
Figure 9: Comparison of resource utilization for different types of computing nodes by FF, BF, FFD, BFD, and DRAM with different datasets ((a)–(d): number of fog services = 500, 1000, 1500, and 2000).
Figure 10: Comparison of average load-balance variance by FF, BF, FFD, BFD, and DRAM with different datasets.
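The gap in Figure 10 comes from the dynamic adjustment step: workloads on lightly used nodes are offered to busier nodes that still have room, so the light nodes can be vacated. A toy pass in that spirit (an illustrative sketch with hypothetical names, not the paper's Algorithm 4):

```python
def vacate_lightest(nodes):
    """Try to empty the node with the most spare space by moving its
    workloads onto busier nodes that can still host them.

    `nodes` is a list of dicts {"cap": int, "loads": [int, ...]};
    the structure is an assumption made for this sketch.
    """
    def spare(n):
        return n["cap"] - sum(n["loads"])

    nodes.sort(key=spare)          # busiest (least spare) first
    donor = nodes[-1]              # lightest-loaded node
    for load in list(donor["loads"]):
        for target in nodes[:-1]:  # prefer the tighter nodes
            if spare(target) >= load:
                target["loads"].append(load)
                donor["loads"].remove(load)
                break
    return nodes

# A node holding only a 2-unit workload hands it to the busy node,
# after which the emptied node could be shut down.
```

Repeating such passes at each competition instant keeps the per-type utilization tight around its mean, which is exactly what lowers the load-balance variance.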
Figure 11: Comparison of load-balance variance values for different types of computing nodes by FF, BF, FFD, BFD, and DRAM with different datasets ((a)–(d): number of fog services = 500, 1000, 1500, and 2000).
The data storage and processing usually benefit from cloud computing, which provides scalable and elastic resources for executing the IoT applications [21–26]. With the ever-expanding data volume, it is difficult for cloud computing to provide efficient, low-latency computing services, and fog computing is proposed to complement this shortage of cloud computing [1, 2].

Compared to the remote cloud computing center, fog computing is closer to the Internet of Things devices and sensors, and it can quickly solve lightweight tasks with fast response. In the era of big data, along with the expansion of cloud computing, fog computing is widely used in the medical, transportation, and communication fields, to name a few [21, 27–30].

Hu et al. [21] designed a fog computing framework in detail, compared it with traditional cloud computing, and put forward a practical application case in the fog computing environment. Similarly, Kitanov and Janevski [27] compared the computing performance of cloud computing with fog computing in the 5G network, reflecting the superiority of fog computing. Akrivopoulos et al. [28] presented a technology responding to the combination of IoT applications and fog computing and introduced the use of fog computing in an automated medical monitoring platform to improve the medical work for patients. Arfat et al. [29] not only proposed, for the first time, the integration of mobile applications, big data analysis, and fog computing, but also introduced Google Maps as an example, showing the diversification of information feedback in such a system. Taneja and Davy [30] studied the computational efficiency in the fog environment and constructed a resource-aware placement.

Generally, resource allocation refers to the allocation of specific, limited resources with effective management to achieve the optimal use of resources.
The original intention of cloud computing is to allocate network resources on demand, so that they can be used and billed in the same way as water and electricity [31, 32]. In cloud computing, resource allocation methods can effectively help to achieve the goals of high resource usage and energy saving through centralized resource management for different types of applications [4, 33–35]. Fog computing, as an extension of the cloud computing paradigm, also needs to conduct resource allocation to achieve high-efficiency resource usage.

Mashayekhy et al. [36] proposed an auction-based online mechanism, which can access in real time the actual needs of users and price the resources allocated to each user appropriately. Kwak et al. [37] developed the DREAM algorithm for complex tasks in mobile devices, saving 35% of total energy and managing network resources. To address the challenges of high latency and resource shortage in clouds, Alsaffar et al. [38] proposed a resource management framework in which cloud computing and fog computing collaborate, and then optimized the resource allocation in fog computing. Xiang et al. [39] designed a fog radio access network (F-RAN) architecture, which effectively achieved high resource usage and could coordinate global resource scheduling.

Load balancing is an effective factor in determining the resource allocation strategy. For multiple computing tasks, load balancing could prompt the resource managers to assign these tasks to multiple computing nodes for execution. The realization of load balancing not only saves the cost of hardware facilities but also improves resource efficiency. Banerjee and Hecker [40] proposed a distributed resource allocation protocol to realize load balancing in a large-scale distributed network; as a result, compared to FIFO, the response time and resource utilization could be greatly improved. Govindaraju and Duran-Limon [41] designed a method based on the lifecycle-related Service Level Agreement (SLA) parameters of the virtual machines in the cloud environment to address resource utilization and cost issues. Evolutionary algorithms are proved to be powerful in solving multiobjective problems, which could be leveraged in resource scheduling in fog computing [42]. Jeyakrishnan and Sengottuvelan [43] developed a new algorithm that saves operating costs while maximizing the use of resources, outperforming SA, PSO, and ADS in balanced scheduling.

For the load-balance optimization problem solved in this paper, traditional operations research is proved to be efficient for optimization problems with constraints [44, 45]. Game theory is also efficient for resource allocation with resource competition, where the Nash equilibria often need to be verified first [46, 47].

To the best of our knowledge, there are few studies focusing on the resource allocation of fog services in the fog environment that aim to realize load balancing for the computing nodes in both fog and cloud.

6. Conclusion and Future Work

In recent years, IoT has been one of the most popular technologies for daily lives. With the rapid development of IoT, fog computing is emerging as one of the most powerful paradigms for processing IoT applications. In the fog environment, the IoT applications are performed by the edge computing nodes and the intermediate computing nodes in the fog, as well as the physical machines in the cloud platforms. To achieve dynamic load balancing for each type of computing node in the fog and cloud, a dynamic resource allocation method, named DRAM, for load balancing has been developed in this paper. Firstly, a system framework in fog computing was presented and load balancing for the computing nodes was analyzed accordingly. Then, the DRAM method was implemented based on static resource allocation and dynamic resource scheduling for fog services. Finally, experimental evaluations and comparison analysis were carried out to verify the validity of our proposed method.

For future work, we plan to analyze the negative impacts of service migration, including the traffic for different types of computing nodes, the cost of service migration, the performance degradation caused by service migration, and the data transmission cost. Furthermore, we will design a corresponding method to balance the negative effects and the positive impacts of service migration.

Key Terms and Descriptions Involved in Resource Scheduling in Fog Environment

M: The number of computing nodes
P: The set of computing nodes, P = {p_1, p_2, ..., p_M}
N: The number of services
S: The set of services, S = {s_1, s_2, ..., s_N}
p_m: The mth (1 ≤ m ≤ M) computing node in P
s_n: The nth (1 ≤ n ≤ N) service in S
c_m: The resource capacity of the mth computing node p_m
r_n: The set of resource requirements of the nth service s_n
W: The number of types of processing nodes
β_{wm}: A flag to judge whether p_m is a type-w computing node
ru_m(t): The resource utilization for p_m at time t
RU_w(t): The resource utilization for the wth type of computing nodes at time t
lb_m(t): The load-balance variance for p_m at time t
LB_w(t): The load-balance variance for the wth type of computing nodes at time t
LB̄_w: The average load-balance variance for the wth type of computing nodes.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research is supported by the National Natural Science Foundation of China under Grants nos. 61702277, 61672276, 61772283, 61402167, and 61672290, the Key Research and Development Project of Jiangsu Province under Grants nos.
BE2015154 and BE2016120, and the Natural Science Foundation of Jiangsu Province (Grant no. BK20171458). Besides, this work is also supported by the Startup Foundation for Introducing Talent of NUIST, the Open Project from State Key Laboratory for Novel Software Technology, Nanjing University, under Grant no. KFKT2017B04, the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology (CICAEET), and the project "Six Talent Peaks Project in Jiangsu Province" under Grant no. XYDXXJS-040. Special thanks are due to Dou Ruihan, Nanjing Jinling High School, Nanjing, China, for his intelligent contribution to our algorithm discussion and experiment development.

References

[1] Y. Kong, M. Zhang, and D. Ye, "A belief propagation-based method for task allocation in open and dynamic cloud environments," Knowledge-Based Systems, vol. 115, pp. 123–132, 2017.
[2] S. Sarkar, S. Chatterjee, and S. Misra, "Assessment of the suitability of fog computing in the context of Internet of Things," IEEE Transactions on Cloud Computing, vol. 6, no. 1, pp. 46–59, 2015.
[3] K. Peng, R. Lin, B. Huang, H. Zou, and F. Yang, "Link importance evaluation of data center network based on maximum flow," Journal of Internet Technology, vol. 18, no. 1, pp. 23–31, 2017.
[4] E. Luo, M. Z. Bhuiyan, G. Wang, M. A. Rahman, J. Wu, and M. Atiquzzaman, "PrivacyProtector: privacy-protected patient data collection in IoT-based healthcare systems," IEEE Communications Magazine, vol. 56, no. 2, pp. 163–168, 2018.
[5] P. Li, S. Zhao, and R. Zhang, "A cluster analysis selection strategy for supersaturated designs," Computational Statistics & Data Analysis, vol. 54, no. 6, pp. 1605–1612, 2010.
[6] G.-L. Tian, M. Wang, and L. Song, "Variable selection in the high-dimensional continuous generalized linear model with current status data," Journal of Applied Statistics, vol. 41, no. 3, pp. 467–483, 2014.
[7] S. Wang, T. Lei, L. Zhang, C.-H. Hsu, and F. Yang, "Offloading mobile data traffic for QoS-aware service provision in vehicular cyber-physical systems," Future Generation Computer Systems, vol. 61, pp. 118–127, 2016.
[8] S. Yi, C. Li, and Q. Li, "A survey of fog computing: concepts, applications and issues," in Proceedings of the Workshop on Mobile Big Data (Mobidata '15), pp. 37–42, ACM, Hangzhou, China, June 2015.
[9] X. L. Xu, X. Zhao, F. Ruan et al., "Data placement for privacy-aware applications over big data in hybrid clouds," Security and Communication Networks, vol. 2017, Article ID 2376484, 15 pages, 2017.
[10] X. Xu, W. Dou, X. Zhang, C. Hu, and J. Chen, "A traffic hotline discovery method over cloud of things using big taxi GPS data," Software: Practice and Experience, vol. 47, no. 3, pp. 361–377, 2017.
[11] F. Bonomi, R. Milito, P. Natarajan, and J. Zhu, "Fog computing: a platform for internet of things and analytics," in Big Data and Internet of Things: A Roadmap for Smart Environments, vol. 546 of Studies in Computational Intelligence, pp. 169–186, 2014.
[12] X. Xu, X. Zhang, M. Khan, W. Dou, S. Xue, and S. Yu, "A balanced virtual machine scheduling method for energy-performance trade-offs in cyber-physical cloud systems," Future Generation Computer Systems, 2017.
[13] L. Yu, L. Chen, Z. Cai, H. Shen, Y. Liang, and Y. Pan, "Stochastic load balancing for virtual resource management in datacenters," IEEE Transactions on Cloud Computing, pp. 1–14, 2016.
[14] Y. Sahu, R. K. Pateriya, and R. K. Gupta, "Cloud server optimization with load balancing and green computing techniques using dynamic compare and balance algorithm," in Proceedings of the 5th International Conference on Computational Intelligence and Communication Networks (CICN 2013), pp. 527–531, India, September 2013.
[15] G. Soni and M. Kalra, "A novel approach for load balancing in cloud data center," in Proceedings of the 4th IEEE International Advance Computing Conference (IACC 2014), pp. 807–812, India, February 2014.
[16] X. Xu, W. Dou, X. Zhang, and J. Chen, "EnReal: an energy-aware resource allocation method for scientific workflow executions in cloud environment," IEEE Transactions on Cloud Computing, vol. 4, no. 2, pp. 166–179, 2016.
[17] S. Li and Y. Zhang, "On-line scheduling on parallel machines to minimize the makespan," Journal of Systems Science & Complexity, vol. 29, no. 2, pp. 472–477, 2016.
[18] G. Wang, X. X. Huang, and J. Zhang, "Levitin-Polyak well-posedness in generalized equilibrium problems with functional constraints," Pacific Journal of Optimization, vol. 6, no. 2, pp. 441–453, 2010.
[19] B. Qu and J. Zhao, "Methods for solving generalized Nash equilibrium," Journal of Applied Mathematics, vol. 2013, Article ID 762165, 2013.
[20] S. Lian and Y. Duan, "Smoothing of the lower-order exact penalty function for inequality constrained optimization," Journal of Inequalities and Applications, article no. 185, 12 pages, 2016.
[21] P. Hu, S. Dhelim, H. Ning, and T. Qiu, "Survey on fog computing: architecture, key technologies, applications and open issues," Journal of Network and Computer Applications, vol. 98, pp. 27–42, 2017.
[22] A. V. Dastjerdi and R. Buyya, "Fog computing: helping the Internet of Things realize its potential," Computer, vol. 49, no. 8, Article ID 7543455, pp. 112–116, 2016.
[23] J. Shen, T. Zhou, D. He, Y. Zhang, X. Sun, and Y. Xiang, "Block design-based key agreement for group data sharing in cloud computing," IEEE Transactions on Dependable and Secure Computing, vol. PP, no. 99, 2017.
[24] Z. Xia, X. Wang, L. Zhang, Z. Qin, X. Sun, and K. Ren, "A privacy-preserving and copy-deterrence content-based image retrieval scheme in cloud computing," IEEE Transactions on Information Forensics and Security, vol. 11, no. 11, pp. 2594–2608, 2016.
[25] J. Shen, D. Liu, J. Shen, Q. Liu, and X. Sun, "A secure cloud-assisted urban data sharing framework for ubiquitous-cities," Pervasive and Mobile Computing, vol. 41, pp. 219–230, 2017.
[26] Z. Fu, X. Wu, C. Guan, X. Sun, and K. Ren, "Toward efficient multi-keyword fuzzy search over encrypted outsourced data with accuracy improvement," IEEE Transactions on Information Forensics and Security, vol. 11, no. 12, pp. 2706–2716, 2016.
[27] S. Kitanov and T. Janevski, "Energy efficiency of fog computing and networking services in 5G networks," in Proceedings of the 17th IEEE International Conference on Smart Technologies (EUROCON 2017), pp. 491–494, Macedonia, July 2017.
[28] O. Akrivopoulos, I. Chatzigiannakis, C. Tselios, and A. Antoniou, "On the deployment of healthcare applications over fog computing infrastructure," in Proceedings of the 41st IEEE Annual Computer Software and Applications Conference Workshops (COMPSAC 2017), pp. 288–293, Italy, July 2017.
[29] Y. Arfat, M. Aqib, R. Mehmood et al., "Enabling smarter societies through mobile big data fogs and clouds," in Proceedings of the International Workshop on Smart Cities Systems Engineering (SCE 2017), vol. 109 of Procedia Computer Science, pp. 1128–1133, Portugal, May 2017.
[30] M. Taneja and A. Davy, "Resource aware placement of data analytics platform in fog computing," in Proceedings of the 2nd International Conference on Cloud Forward: From Distributed to Complete Computing (CF 2016), Procedia Computer Science, pp. 153–156, Spain, October 2016.
[31] N. Fernando, S. W. Loke, and W. Rahayu, "Mobile cloud computing: a survey," Future Generation Computer Systems, vol. 29, no. 1, pp. 84–106, 2013.
[32] S. S. Manvi and G. Krishna Shyam, "Resource management for Infrastructure as a Service (IaaS) in cloud computing: a survey," Journal of Network and Computer Applications, vol. 41, no. 1, pp. 424–440, 2014.
[33] Y. Ren, J. Shen, D. Liu, J. Wang, and J.-U. Kim, "Evidential quality preserving of electronic record in cloud storage," Journal of Internet Technology, vol. 17, no. 6, pp. 1125–1132, 2016.
[34] Y. Chen, C. Hao, W. Wu, and E. Wu, "Robust dense reconstruction by range merging based on confidence estimation," Science China Information Sciences, vol. 59, no. 9, Article ID 092103, pp. 1–11, 2016.
[35] T. Ma, Y. Zhang, J. Cao, J. Shen, M. Tang, Y. Tian, A. Al-Dhelaan, and M. Al-Rodhaan, "KDVEM: a k-degree anonymity with vertex and edge modification algorithm," Computing, vol. 70, no. 6, pp. 1336–1344, 2015.
[36] L. Mashayekhy, M. M. Nejad, D. Grosu, and A. V. Vasilakos, "An online mechanism for resource allocation and pricing in clouds," IEEE Transactions on Computers, vol. 65, no. 4, pp. 1172–1184, 2016.
[37] J. Kwak, Y. Kim, J. Lee, and S. Chong, "DREAM: dynamic resource and task allocation for energy minimization in mobile cloud systems," IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, pp. 2510–2523, 2015.
[38] A. A. Alsaffar, H. P. Pham, C.-S. Hong, E.-N. Huh, and M. Aazam, "An architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing," Mobile Information Systems, vol. 2016, Article ID 6123234, pp. 1–15, 2016.
[39] H. Xiang, M. Peng, Y. Cheng, and H.-H. Chen, "Joint mode selection and resource allocation for downlink fog radio access networks supported D2D," in Proceedings of the 11th EAI International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE 2015), pp. 177–182, Taiwan, August 2015.
[40] S. Banerjee and J. P. Hecker, "A multi-agent system approach to load-balancing and resource allocation for distributed computing," in First Complex Systems Digital Campus World E-Conference, pp. 393–408, 2017.
[41] Y. Govindaraju and H. Duran-Limon, "A QoS and energy aware load balancing and resource allocation framework for IaaS cloud providers," in Proceedings of the 9th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2016), pp. 410–415, China, December 2016.
[42] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[43] V. Jeyakrishnan and P. Sengottuvelan, "A hybrid strategy for resource allocation and load balancing in virtualized data centers using BSO algorithms," Wireless Personal Communications, vol. 94, no. 4, pp. 2363–2375, 2017.
[44] H. Wu, Y. Ren, and F. Hu, "Continuous dependence property of BSDE with constraints," Applied Mathematics Letters, vol. 45, pp. 41–46, 2015.
[45] Y. Wang, X. Sun, and F. Meng, "On the conditional and partial trade credit policy with capital constraints: a Stackelberg model," Applied Mathematical Modelling, vol. 40, no. 1, pp. 1–18, 2016.
[46] J. Zhang, B. Qu, and N. Xiu, "Some projection-like methods for the generalized Nash equilibria," Computational Optimization and Applications, vol. 45, no. 1, pp. 89–109, 2010.
[47] C. Wang, C. Ma, and J. Zhou, "A new class of exact penalty functions and penalty algorithms," Journal of Global Optimization, vol. 58, no. 1, pp. 51–73, 2014.