Research Article
Dynamic Resource Allocation for Load
Balancing in Fog Environment
Xiaolong Xu,1,2,3,4 Shucun Fu,1,2 Qing Cai,1,2 Wei Tian,1,2 Wenjie Liu,1,2 Wanchun Dou,3 Xingming Sun,1,2 and Alex X. Liu1,4

1 School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
2 Jiangsu Engineering Centre of Network Monitoring, Nanjing University of Information Science and Technology, Nanjing, China
3 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
4 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI, USA
Copyright © 2018 Xiaolong Xu et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Fog computing is emerging as a powerful and popular computing paradigm for performing IoT (Internet of Things) applications; it extends the cloud computing paradigm so that IoT applications can be executed at the edge of the network. IoT applications may choose either fog or cloud computing nodes to respond to their resource requirements, and load balancing is one of the key factors for achieving resource efficiency and avoiding bottlenecks, overload, and low load. However, it remains a challenge to realize load balance for the computing nodes in the fog environment during the execution of IoT applications. In view of this challenge, a dynamic resource allocation method, named DRAM, for load balancing in the fog environment is proposed in this paper. Technically, a system framework for fog computing and a load-balance analysis for various types of computing nodes are presented first. Then, a corresponding resource allocation method in the fog environment is designed through static resource allocation and dynamic service migration to achieve load balance for fog computing systems. Experimental evaluation and comparison analysis are conducted to validate the efficiency and effectiveness of DRAM.
should choose the appropriate computing nodes to host the fog services combined in the IoT applications through the design of resource allocation strategies.

Resource allocation and resource scheduling are the key technologies for managing data centers; they contribute a great deal to lowering carbon emission, improving resource utilization, and obtaining load balance for the data centers [12–14]. In the cloud environment, the main goal of resource allocation is to optimize the number of active physical machines (PMs) and distribute the workloads of the running PMs in a balanced manner, to avoid bottlenecks and overloaded or low-loaded resource usage [14–17]. In the fog environment, resource allocation becomes more complicated, since the applications could be responded to by computing nodes both in the fog and in clouds. The computing nodes in the fog are distributed dispersedly at the edge of the network, while the computing nodes in the cloud are located in a centralized data center. The resource requirements of the IoT applications for the computing nodes vary, as the applications have different demands of computing power, storage capacity, and bandwidth. Therefore, it is necessary to undergo resource allocation for the dynamic resource requirements of the IoT applications, to achieve the goal of load balancing.

With the above observations, it is still a challenge to realize load balance for the computing nodes in the fog environment during the execution of IoT applications. In view of this challenge, a dynamic resource allocation method, named DRAM, for load balancing in the fog environment is proposed in this paper. Specifically, our main contributions are threefold. Firstly, we present a system framework for IoT applications in the fog environment and conduct the load-balance analysis for various types of computing nodes. Then, a corresponding resource allocation method in the fog environment is designed through static resource allocation and dynamic service migration to achieve load balance for fog computing systems. Finally, adequate experimental analysis is conducted to verify the performance of our proposed method.

The rest of this paper is organized as follows. In Section 2, formalized concepts and definitions are presented for load-balance analysis in the fog environment. Section 3 elaborates the proposed resource allocation method, DRAM. Section 4 illustrates the comparison analysis and performance evaluation. Section 5 summarizes the related work, and Section 6 concludes the paper and presents the prospects for future work.

2. Preliminary Knowledge

In this section, a fog computing framework for IoT applications is designed and the load-balance analysis is conducted as well. To facilitate dynamic resource allocation for load balancing in the fog environment, formal concepts and load-balance analysis are presented. Key notations and descriptions used in this section are listed in the section named "Key Terms and Descriptions Involved in Resource Scheduling in Fog Environment."

2.1. System Framework for Fog Computing. Fog computing is a new computing paradigm which sufficiently leverages the decentralized resources through the fog and cloud environments, to provision computation and storage services for the IoT applications. Fog computing extends data processing and data storage between the smart sensors and the cloud data centers. Some of the tasks from the IoT applications could be processed in the fog rather than performing all the tasks in the cloud environment. Virtualization technology could be employed to improve the resource usage in the fog environment.

Figure 1 shows a hierarchical framework for computing tasks from IoT applications in the fog environment. There are four layers in this framework, that is, the IoT application layer, the service layer, the fog layer, and the cloud layer. The IoT applications contain a large amount of service requirements that need to be responded to by selecting appropriate computing nodes according to the time urgency and the resource amount from fog and cloud. The fog layer consists of the edge computing nodes and the intermediate computing nodes. The services with the highest time urgency and lower computation density can be calculated by the edge computing nodes in the fog layer, and the less urgent tasks could choose intermediate computing nodes for execution. The cloud layer is appropriate for hosting the loose-fitting tasks with high-density computation and huge-volume storage, which often demand a large amount of physical resources. The intermediate computing nodes could be the routers for data transmission, the edge computing nodes could be mobile devices or sensors, and the computing nodes in the cloud are the PMs.

Fog computing is useful for IoT applications which combine many fog services. The fog services cover all the procedures of data extraction, data transmission, data storage, and service execution from the IoT applications.

Definition 1 (fog service). The services requested by the IoT applications are available to be performed in the fog and remote cloud data centers, denoted as 𝑆 = {𝑠1, 𝑠2, ..., 𝑠𝑁}, where 𝑁 is the number of fog services generated by the IoT applications.

The PMs in the remote cloud data centers, the intermediate computing nodes, and the edge computing nodes in the fog environment are all available to be leveraged to provision physical resources for the fog services. Suppose there are 𝑀 computing nodes in the fog and cloud environment, denoted as 𝑃 = {𝑝1, 𝑝2, ..., 𝑝𝑀}. Virtualization technology has been widely applied in the cloud environment, and it is also adaptive in the fog environment to measure the resource capacity of all the computing nodes and the resource requirements of the fog services.

Definition 2 (resource capacity of computing nodes). For all the computing nodes in the fog and cloud, their resource capacities are quantified as the number of resource units, and each resource unit contains various physical resources, including CPU, memory, and bandwidth.
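To make Definitions 1 and 2 concrete, the node set 𝑃 and service set 𝑆 can be modeled as plain records; a minimal Python sketch, where the class names, field names, and example capacities are our own illustrative choices rather than anything prescribed by the paper:

```python
from dataclasses import dataclass

# Node types from the hierarchical framework in Figure 1.
EDGE, INTERMEDIATE, CLOUD_PM = 0, 1, 2

@dataclass
class ComputingNode:
    """A computing node p_m with its capacity in resource units (Definition 2)."""
    node_id: int
    node_type: int   # EDGE, INTERMEDIATE, or CLOUD_PM
    capacity: int    # c_m: number of resource units, each bundling CPU, memory, bandwidth

@dataclass
class FogService:
    """A fog service s_n with its resource requirement r_n (Definitions 1 and 3)."""
    service_id: int
    start_time: float   # stim_n: request start time
    duration: float     # dutim_n: duration time
    node_type: int      # type_n: required node type
    amount: int         # cou_n: requested amount of resource units

# Illustrative sets P and S (capacities chosen to match a later example in Section 3.4).
P = [ComputingNode(0, EDGE, 4), ComputingNode(1, INTERMEDIATE, 6), ComputingNode(2, CLOUD_PM, 9)]
S = [FogService(0, 0.0, 0.5, EDGE, 2), FogService(1, 0.0, 1.0, CLOUD_PM, 6)]
```

A resource unit here is deliberately opaque, matching the paper's convention of measuring both capacity and demand in whole units (e.g., VM instances in a cloud data center).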
Wireless Communications and Mobile Computing 3
Definition 3 (resource requirement of 𝑠𝑛). The corresponding resource requirement of the 𝑛th fog service 𝑠𝑛 could be quantified as the time requirements, the resource type, and the resource amount to perform 𝑠𝑛, denoted as 𝑟𝑛 = {𝑠𝑡𝑖𝑚𝑛, 𝑑𝑢𝑡𝑖𝑚𝑛, 𝑡𝑦𝑝𝑒𝑛, 𝑐𝑜𝑢𝑛}, where 𝑠𝑡𝑖𝑚𝑛, 𝑑𝑢𝑡𝑖𝑚𝑛, 𝑡𝑦𝑝𝑒𝑛, and 𝑐𝑜𝑢𝑛 represent the request start time, the duration time, the resource type, and the requested amount of 𝑠𝑛, respectively.

Note that the resource requirements in this paper are measured by the amount of resource units. For example, in the cloud data centers, the resource units are the VM instances, and customers often rent several VM instances to host one application.

With the judgement of the fog service distribution, the resource utilization for the 𝑚th computing node 𝑝𝑚 at time 𝑡 is calculated by

$$ru_m(t) = \frac{1}{c_m} \sum_{n=1}^{N} I_n^m(t) \cdot r_n, \quad (2)$$

where 𝑐𝑚 is the capacity of the computing node 𝑝𝑚.

According to the service distribution, the number of services deployed on 𝑝𝑚 at time instant 𝑡 is calculated by

$$ns_m(t) = \sum_{n=1}^{N} I_n^m(t). \quad (3)$$
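The per-node statistics of (2) and (3) can be computed directly from the allocation state; below is a hedged Python sketch, where the indicator 𝐼 is derived from a hypothetical `placement` map (service id to node id), a bookkeeping structure of our own, since the paper leaves the representation open:

```python
from collections import namedtuple

# Hypothetical flat service record: id, start time, duration, requested units (cou_n).
Service = namedtuple("Service", "id start duration amount")

def indicator(service, node_id, placement, t):
    """I_n^m(t): 1 if s_n is placed on p_m and active at time t, else 0.
    A service is active on [start, start + duration)."""
    active = service.start <= t < service.start + service.duration
    return 1 if active and placement.get(service.id) == node_id else 0

def resource_utilization(services, node_id, capacity, placement, t):
    """Eq. (2): ru_m(t) = (1/c_m) * sum_n I_n^m(t) * r_n, with r_n read as requested units."""
    used = sum(indicator(s, node_id, placement, t) * s.amount for s in services)
    return used / capacity

def deployed_services(services, node_id, placement, t):
    """Eq. (3): ns_m(t) = sum_n I_n^m(t)."""
    return sum(indicator(s, node_id, placement, t) for s in services)

services = [Service(0, 0.0, 2.0, 2), Service(1, 1.0, 2.0, 3)]
placement = {0: "p1", 1: "p1"}
resource_utilization(services, "p1", 10, placement, 1.5)  # -> 0.5 (5 of 10 units in use)
deployed_services(services, "p1", placement, 1.5)         # -> 2
```

Both services overlap at t = 1.5, so node p1 carries 2 + 3 = 5 of its 10 units, giving utilization 0.5 and two deployed services.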
$$\beta_w^m = \begin{cases} 1, & \text{if } p_m \text{ is a type-}w \text{ computing node}, \\ 0, & \text{otherwise}, \end{cases} \quad (5)$$

$$f_m(t) = \begin{cases} 1, & \text{if } ns_m(t) > 0, \\ 0, & \text{otherwise}. \end{cases} \quad (6)$$

The resource utilization for the computing nodes of the 𝑤th type at time 𝑡 is calculated by

$$RU_w(t) = \frac{1}{a_w(t)} \sum_{m=1}^{M} \sum_{n=1}^{N} I_n^m(t) \cdot ru_m(t) \cdot f_m(t). \quad (7)$$

Definition 4 (variance of load balance of 𝑝𝑚 at time instant 𝑡). The load-balance value is measured by the variance of the resource utilization. The variance value for 𝑝𝑚 is calculated by

$$lb_m(t) = \Big( ru_m(t) - \sum_{w=1}^{W} RU_w(t) \cdot \beta_w^m \Big)^2. \quad (8)$$

Then, the average variance value for all the type-𝑤 computing nodes is calculated by

$$LB_w(t) = \frac{1}{a_w(t)} \sum_{m=1}^{M} \sum_{n=1}^{N} I_n^m(t) \cdot lb_m(t) \cdot f_m(t). \quad (9)$$

For the execution period [𝑇0, 𝑇] in the fog and cloud environment, the load-balance variance could be calculated by

$$LB_w = \frac{1}{T - T_0} \int_{T_0}^{T} LB_w(t)\, dt. \quad (10)$$

With these observations, the problem of minimizing the variance of load balance can be formulated as follows:

$$\min\; LB_w, \quad \forall w = 1, \ldots, W, \quad (11)$$

$$\text{s.t.}\quad a_w(t) \le \sum_{m=1}^{M} \beta_w^m, \quad (12)$$

$$\sum_{n=1}^{N} r_n \le \sum_{m=1}^{M} c_m, \quad (13)$$

$$\sum_{n=1}^{N} I_n^m r_n \le c_m, \quad (14)$$

where $\sum_{n=1}^{N} r_n$ in formula (13) represents the resource requirements of all services, $\sum_{m=1}^{M} c_m$ in formula (13) represents the capacity of all computing nodes, and $\sum_{n=1}^{N} I_n^m r_n$ in formula (14) represents the resource requirements of the services allocated to the 𝑚th computing node.

From (11) to (14), we can see that the final objective solved in this paper is an optimization problem with multiple constraints [18–20].

3. A Dynamic Resource Allocation Method for Load Balancing in Fog Environment

In this section, we propose a dynamic resource allocation method, named DRAM, for load balancing in the fog environment. Our method aims to achieve high load balance for all the types of computing nodes in the fog and cloud platforms.

3.1. Method Overview. Our method consists of four main steps, that is, fog service partition, spare space detection for computing nodes, static resource allocation for the fog service subsets, and load-balance driven global resource allocation, as shown in the section named "Specification of Our Proposed Resource Allocation Method for Load Balancing in Fog Environment." In this method, Step 1 is the preprocessing procedure, Step 2 is employed to detect the resource usage for Steps 3 and 4, and Step 3 is designed for static resource allocation for the fog services in the same subset and provides the primary resource provision strategies for Step 4. Finally, Step 4 is a global resource allocation method to realize dynamic load balance.

Specification of Our Proposed Resource Allocation Method for Load Balancing in Fog Environment

Step 1 (fog service partition). There are different types of computing nodes for the performance of fog services. To efficiently provision resources, the fog services are classified into several sets based on the resource requirements of node type. Furthermore, these sets are divided into multiple subsets according to the request start time.

Step 2 (spare space detection for computing nodes). To judge whether a computing node is suitable to host a fog service, it is necessary to detect the spare space of all the computing nodes. We analyze the employed resource units through the analysis of occupation records, and then the spare space of the computing nodes can be obtained.

Step 3 (static resource allocation for fog service subset). For the fog services in the same service subset, the proper computing nodes are identified to host these services. When allocating resource units for a fog service, the computing node with the least yet sufficient spare space is selected. Besides, some workloads from the computing nodes with higher resource usage are migrated to the computing nodes with low resource usage.

Step 4 (load-balance driven global resource allocation). For all the fog service subsets, we could find the initialized resource allocation strategies in Step 3, and then dynamic resource allocation adjustment is conducted at the completion moments of the fog services to achieve global load balance during the execution period of the fog services.

3.2. Fog Service Partition. The fog services from different IoT applications have different requirements of computing resources; that is, the fog services need to choose different
3.3. Spare Space Detection for Computing Nodes. The fog services need to be put on the computing nodes; thus, those computing nodes with spare space should be identified. For all the computing nodes, the allocation records are employed

Definition 6 (spare space of 𝑝𝑚). The spare space of the computing node 𝑝𝑚 is defined as the idle amount of resource units, which is measured by the difference between the capacity and the occupied amount of resource units on 𝑝𝑚.
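Looking back at Step 1, the fog service partition of Section 3.2 produces the subsets that the static allocation below consumes: services are first grouped by required node type and then split by request start time. A small sketch of one way to realize that grouping; the dictionary-of-lists representation is our own choice, not from the paper:

```python
from collections import defaultdict

def partition_fog_services(services):
    """Step 1: classify fog services by required node type, then divide each
    class into subsets by request start time. Returns a dict keyed by
    (node_type, start_time), each value being the list of services in that subset."""
    subsets = defaultdict(list)
    for s in services:
        subsets[(s["node_type"], s["start_time"])].append(s)
    return dict(subsets)

services = [
    {"id": 1, "node_type": "edge", "start_time": 0.0, "amount": 2},
    {"id": 2, "node_type": "edge", "start_time": 0.0, "amount": 1},
    {"id": 3, "node_type": "cloud", "start_time": 0.5, "amount": 6},
]
subsets = partition_fog_services(services)
# Two subsets: ("edge", 0.0) holding services 1 and 2, and ("cloud", 0.5) holding service 3.
```

Services in one subset therefore share a node type and a start time, which is exactly the precondition that Section 3.4 relies on when allocating a whole subset at once.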
[Figure 3: An example of spare space detection with 6 occupation records (i.e., 𝑟1∼𝑟6); the resource units 𝑢1,1–𝑢1,6 of 𝑝1 are plotted against time instants 0–3.0.]

Algorithm 2: Spare space detection for computing nodes.

Input: The occupation records for the computing node 𝑝𝑚
Output: The spare space for 𝑝𝑚
(1) 𝑐𝑜𝑢 = 0
(2) for 𝑖 = 1 to |𝑟𝑠𝑚| do
(3)   𝑐𝑡𝑚,𝑖 = 𝑠𝑡𝑚,𝑖 + 𝑑𝑡𝑚,𝑖  // 𝑐𝑡𝑚,𝑖 is the finish time for the occupation of the resource units
(4)   if 𝑡 ≥ 𝑠𝑡𝑚,𝑖 && 𝑡 < 𝑐𝑡𝑚,𝑖 then  // 𝑡 is the request time for statistics
(5)     𝑐𝑜𝑢 = 𝑐𝑜𝑢 + |𝑢𝑠𝑚,𝑖|
(6)   end if
(7) end for
(8) 𝑐𝑜𝑢 = 𝑐𝑚 − 𝑐𝑜𝑢
(9) Return 𝑐𝑜𝑢

The spare space of the computing node 𝑝𝑚 can be detected from the analysis of its occupation records. In these records, if the occupation start time is less than the statistic time instant for checking the node status and the occupation finish time is beyond the statistic time, the relevant resource units combined in the occupation records can be obtained. With these acquired resource units and the resource capacity of 𝑝𝑚, the spare space can finally be detected.

For example, there are 6 occupation records for the computing node 𝑝1, that is, 𝑟1: (1, 0, 0.5, {𝑢1,1}), 𝑟2: (2, 0.3, 0.7, {𝑢1,2}), 𝑟3: (3, 1.3, 1, {𝑢1,1, 𝑢1,2}), 𝑟4: (4, 1.4, 1.2, {𝑢1,3}), 𝑟5: (5, 1.4, 0.8, {𝑢1,4, 𝑢1,5}), and 𝑟6: (6, 2.5, 0.5, {𝑢1,1}), as shown in Figure 3. If the statistic instant for spare space detection is 1.5, the identified occupation records within the statistic time are 𝑟3, 𝑟4, and 𝑟5. Based on the analysis of the occupation records, the employed resource units at this moment are 𝑢1,1, 𝑢1,2, 𝑢1,3, 𝑢1,4, and 𝑢1,5. Suppose the capacity of 𝑝1 is 6; then, the spare space of 𝑝1 at time instant 1.5 is 1.

Algorithm 2 specifies the key idea of spare space detection for computing nodes. In Algorithm 2, the input and the output are the occupation records of the computing node 𝑝𝑚 and the spare space for 𝑝𝑚, respectively. According to Definition 6, we need to calculate the occupation amount of resource units on 𝑝𝑚 first (Lines (1) to (7)), and then the spare space can be detected (Line (8)).

3.4. Static Resource Allocation for Fog Service Subset. Based on the fog service partition in Section 3.2 and the spare space detection for computing nodes in Section 3.3, the fog services that need to be processed and the available computing resources for fog service execution are identified, which are all beneficial for resource allocation.

In the fog environment, the fog services need to be responded to by the computing nodes, and the time requirements also should be presented when allocating the resources to the fog services. In this section, we define the resource allocation records to preserve the allocation history about resource provisioning for the fog services.

Definition 7 (resource allocation record for 𝑠𝑛). The resource allocation record for 𝑠𝑛 consists of the node type, the number of resource units, the start time, and the desired duration time, which is denoted as 𝑎𝑛 = (𝑛𝑡𝑛, 𝑛𝑢𝑚𝑛, 𝑠𝑡𝑛, 𝑑𝑡𝑛), where 𝑛𝑡𝑛, 𝑛𝑢𝑚𝑛, 𝑠𝑡𝑛, and 𝑑𝑡𝑛 are the node type, the amount of resource units, the service start time, and the resource occupation time for 𝑠𝑛, respectively.

The fog services in the same fog service subset have the same required node type and start time. When allocating resource units for the fog service subset, each fog service should find a suitable computing node to host it. Thus, we need to find the available nodes first. The computing nodes of the requested type which have spare space, as detected by Algorithm 2, are chosen as the candidate resources to be provided for the fog services in the subset.

To achieve load balance of the computing nodes, we try to achieve high resource usage of the employed computing nodes. The problem of static resource allocation is like the bin packing problem, which is NP-hard. Here, we leverage the idea of Best Fit Decreasing to realize the process of computing node matching for the fog services. Before resource allocation, the fog services are sorted in decreasing order of requested resource amount. The service with more required resource units will be processed first, and it chooses the computing node with the least yet sufficient spare space for hosting the service.

For example, there are 3 computing nodes 𝑃1, 𝑃2, and 𝑃3, and the spare spaces of these 3 computing nodes are 4, 6, and 9, respectively, as shown in Figure 4. There are two fog services in the same subset, that is, 𝑆1 and 𝑆2, and the requested resource amounts of these two services are 2 and 6. When conducting resource allocation for 𝑆1 and 𝑆2, 𝑆2 has more resource requirements, and thus 𝑆2 is processed first. After resource allocation, 𝑆2 chose 𝑃2 for hosting and 𝑆1 chose 𝑃1 for hosting.

The above allocation may lead to an unbalanced distribution of the workloads of some computing nodes. In this section, a threshold 𝜌 is employed to judge whether a computing node is in low resource utilization. If a computing node is in low resource utilization and there are no other computing nodes that could host the workloads on this computing node, we choose to move some workloads to this computing node to improve the load balance.
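Algorithm 2 and the worked example above can be reproduced in a few lines; a sketch assuming occupation records shaped as in the example, (record id, start, duration, units occupied):

```python
def spare_space(records, capacity, t):
    """Algorithm 2: spare space of a computing node at statistic time t.
    Each record occupies its resource units on [start, start + duration)."""
    occupied = set()                       # resource units in use at time t
    for _rid, start, duration, units in records:
        finish = start + duration          # ct_{m,i} = st_{m,i} + dt_{m,i}
        if start <= t < finish:            # record is active at the statistic time
            occupied |= set(units)
    return capacity - len(occupied)        # cou = c_m - occupied amount

# The 6 occupation records of p1 from the example (Figure 3).
records_p1 = [
    (1, 0.0, 0.5, {"u11"}),
    (2, 0.3, 0.7, {"u12"}),
    (3, 1.3, 1.0, {"u11", "u12"}),
    (4, 1.4, 1.2, {"u13"}),
    (5, 1.4, 0.8, {"u14", "u15"}),
    (6, 2.5, 0.5, {"u11"}),
]
spare_space(records_p1, capacity=6, t=1.5)  # -> 1, matching the worked example
```

At t = 1.5 only r3, r4, and r5 are active, occupying five distinct units of the six available, so one unit is spare. The sketch collects units in a set, which coincides with Algorithm 2's running sum because a unit can only be held by one active record at a time.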
[Figure 5: Number of fog services for the 3 types of computing nodes (edge computing nodes, intermediate computing nodes, and PMs in cloud) with different datasets (500, 1000, 1500, and 2000 fog services).]

(2) Performance Evaluation on Resource Utilization. The resource utilization is a key factor in deciding the load-balance variance; thus we evaluate this value to discuss the resource usage achieved by FF, BF, FFD, BFD, and DRAM with different datasets. The resource utilization refers to the resource usage of the resource units on the computing nodes. Figure 8 shows the comparison of average resource utilization by FF, BF, FFD, BFD, and DRAM with different datasets. It is intuitive from Figure 8 that DRAM could obtain better resource utilization than FF, BF, FFD, and BFD, since DRAM is a dynamic and adaptive method which could adjust the load distribution during the fog service execution.

Similar to the evaluation on the employed amount of

[Figure 7: Comparison of the number of the employed computing nodes with different types by FF, BF, FFD, BFD, and DRAM using different datasets. Panels (a)–(d) correspond to 500, 1000, 1500, and 2000 fog services.]

Figure 8: Comparison of average resource utilization by FF, BF, FFD, BFD, and DRAM with different datasets.
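The Best-Fit-Decreasing placement that DRAM's static step builds on (Section 3.4) can be sketched compactly; the baselines compared in these figures differ only in whether services are sorted (FFD/BFD) and whether the first or the tightest fitting node is taken (FF/BF). The function name and dictionary interfaces below are our own illustrative choices:

```python
def best_fit_decreasing(demands, spare):
    """Static allocation sketch (Section 3.4): services sorted by decreasing
    requested amount, each placed on the node with the least yet sufficient
    spare space. `demands` maps service -> requested units; `spare` maps
    node -> free units. Returns service -> chosen node (None if nothing fits)."""
    spare = dict(spare)                    # work on a copy; caller's view unchanged
    placement = {}
    for svc, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        candidates = [(free, node) for node, free in spare.items() if free >= need]
        if not candidates:
            placement[svc] = None          # no node of this type can host the service
            continue
        _, node = min(candidates)          # least yet sufficient spare space
        spare[node] -= need
        placement[svc] = node
    return placement

# Example from Section 3.4: spare spaces 4, 6, 9; requested amounts S1 = 2, S2 = 6.
best_fit_decreasing({"S1": 2, "S2": 6}, {"P1": 4, "P2": 6, "P3": 9})
# S2 is processed first and takes the tightest fit P2; S1 then takes P1.
```

This packs the employed nodes densely, which is what lets DRAM's later migration step concentrate on the few under-utilized nodes.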
[Figure 9: Comparison of resource utilization (%) for different types of computing nodes (edge computing nodes, intermediate computing nodes, and PMs in cloud) by FF, BF, FFD, BFD, and DRAM with different datasets. Panels (a)–(d) correspond to 500, 1000, 1500, and 2000 fog services.]
[Figure 10: Comparison of average load-balance variance (×10⁻²) by FF, BF, FFD, BFD, and DRAM with different datasets (500, 1000, 1500, and 2000 fog services).]
[Figure 11: Comparison of load-balance variance values (×10⁻²) for different types of computing nodes (edge computing nodes, intermediate computing nodes, and PMs in cloud) by FF, BF, FFD, BFD, and DRAM with different datasets. Panels (a)–(d) correspond to 500, 1000, 1500, and 2000 fog services.]
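The load-balance variance plotted in Figures 10 and 11 follows Definition 4: the squared deviation of each node's utilization from its type's mean utilization (Eq. (8)), averaged over the active nodes of that type (Eq. (9)). A simplified sketch at a single fixed time instant, where averaging over the active nodes of each type is our reading of the normalization term 𝑎𝑤(𝑡):

```python
def load_balance_variance(utilizations):
    """Per-type load-balance variance at one time instant.
    `utilizations` maps node -> (node_type, ru_m(t)); only active nodes included.
    Returns node_type -> mean of (ru_m(t) - RU_w(t))^2 over that type's nodes,
    i.e., Eqs. (8)-(9) restricted to a fixed t."""
    by_type = {}
    for _node, (w, ru) in utilizations.items():
        by_type.setdefault(w, []).append(ru)
    result = {}
    for w, rus in by_type.items():
        mean = sum(rus) / len(rus)                                   # RU_w(t)
        result[w] = sum((ru - mean) ** 2 for ru in rus) / len(rus)   # Eqs. (8)-(9)
    return result

# Two edge nodes at 40% and 60% utilization: mean 0.5, variance about 0.01.
load_balance_variance({"p1": ("edge", 0.4), "p2": ("edge", 0.6)})
```

Averaging this quantity over the execution period, as in Eq. (10), yields the per-type values reported in the figures: the lower the variance, the more evenly the type's nodes are loaded.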
data storage and processing usually benefit from cloud computing, which provides scalable and elastic resources for executing the IoT applications [21–26]. With the ever-expanding data volume, it is difficult for cloud computing to provide efficient, low-latency computing services, and fog computing has been proposed to complement this shortage of cloud computing [1, 2].

Compared to the remote cloud computing center, fog computing is closer to the Internet of Things devices and sensors, and it can quickly handle lightweight tasks with fast response. In the era of big data, along with the expansion of cloud computing, fog computing is widely used in the medical, transportation, and communication fields, to name a few [21, 27–30].

Hu et al. [21] designed a fog computing framework in detail, compared it with traditional cloud computing, and put forward a practical application case in the fog computing environment. Similarly, Kitanov and Janevski [27] compared the computing performance of cloud computing with fog computing in the 5G network, reflecting the superiority of fog computing. Akrivopoulos et al. [28] presented a technology responding to the combination of IoT applications and fog computing and introduced the use of fog computing in an automated medical monitoring platform to improve the medical work for patients. Arfat et al. [29] not only proposed the integration of mobile applications, big data analysis, and fog computing, but also introduced Google Maps as an example, showing a system of diversified information feedback. Taneja and Davy [30] studied the computational efficiency in the fog environment and constructed a resource-aware placement.

Generally, resource allocation refers to the allocation of specific, limited resources and their effective management to achieve the optimal use of resources. The original intention of cloud computing is to allocate network resources on demand, so that it is the same as the use of water and electricity billing
[31, 32]. In cloud computing, resource allocation methods can effectively help to achieve the goals of high resource usage and energy saving for the centralized resource management of different types of applications [4, 33–35]. Fog computing, as an extension of the cloud computing paradigm, also needs to conduct resource allocation to achieve high-efficiency resource usage.

Mashayekhy et al. [36] proposed an auction-based online mechanism which can assess the actual needs of users in real time and allocate appropriate resources at the users' price. Kwak et al. [37] developed the DREAM algorithm for complex tasks on mobile devices, saving 35% of total energy and managing network resources. To address the challenges of high latency and resource shortage in clouds, Alsaffar et al. [38] proposed a resource management framework in which cloud computing and fog computing collaborate, and then they optimized the resource allocation in fog computing. Xiang et al. [39] designed a fog radio access network (F-RAN) architecture, which effectively achieved high resource usage and could coordinate global resource scheduling.

Load balancing is an effective factor in determining the resource allocation strategy. For multiple computing tasks, load balancing could promote the resource managers to assign these tasks to multiple computing nodes for execution. The realization of load balancing not only saves the cost of hardware facilities but also improves resource efficiency. Banerjee and Hecker [40] proposed a distributed resource allocation protocol to realize load balancing in a large-scale distributed network; as a result, compared to FIFO, the response time and resource utilization could be greatly improved. Govindaraju and Duran-Limon [41] designed a method based on the lifecycle-related Service Level Agreement (SLA) parameters of the virtual machines in the cloud environment to address resource utilization and cost issues. Evolutionary algorithms are proved to be powerful in solving multiobjective problems and could be leveraged in resource scheduling in fog computing [42]. Jeyakrishnan and Sengottuvelan [43] developed a new algorithm that saves operating costs while maximizing the use of resources, being more outstanding in balanced scheduling compared to SA, PSO, and ADS.

For the load-balancing maximization problem solved in this paper, traditional operations research is proved to be efficient for optimization problems with constraints [44, 45]. Game theory is also efficient for resource allocation with resource competition, and the Nash equilibria often need to be verified first [46, 47].

To the best of our knowledge, there are few studies focusing on the resource allocation of fog services in the fog environment which aim to realize load balancing for the computing nodes in both fog and cloud.

environment, the IoT applications are performed by the edge computing nodes and the intermediate computing nodes in the fog, as well as the physical machines in the cloud platforms. To achieve dynamic load balancing for each type of computing node in the fog and cloud, a dynamic resource allocation method, named DRAM, for load balancing has been developed in this paper. Firstly, a system framework for fog computing was presented and load balancing for the computing nodes was analyzed accordingly. Then, the DRAM method was implemented based on static resource allocation and dynamic resource scheduling for fog services. Finally, experimental evaluations and comparison analysis were carried out to verify the validity of our proposed method.

For future work, we intend to analyze the negative impact of service migration, including the traffic for different types of computing nodes, the cost of service migration, the performance degradation caused by service migration, and the data transmission cost. Furthermore, we will design a corresponding method to balance the negative effects and the positive impacts of service migration.

Key Terms and Descriptions Involved in Resource Scheduling in Fog Environment

𝑀: The number of computing nodes
𝑃: The set of computing nodes, 𝑃 = {𝑝1, 𝑝2, ..., 𝑝𝑀}
𝑁: The number of services
𝑆: The set of services, 𝑆 = {𝑠1, 𝑠2, ..., 𝑠𝑁}
𝑝𝑚: The 𝑚th (1 ≤ 𝑚 ≤ 𝑀) computing node in 𝑃
𝑠𝑛: The 𝑛th (1 ≤ 𝑛 ≤ 𝑁) service in 𝑆
𝑐𝑚: The resource capacity of the 𝑚th computing node 𝑝𝑚
𝑟𝑛: The set of resource requirements of the 𝑛th service 𝑠𝑛
𝑊: The number of types of computing nodes
𝛽𝑤𝑚: A flag to judge whether 𝑝𝑚 is a type-𝑤 computing node
𝑟𝑢𝑚(𝑡): The resource utilization for 𝑝𝑚 at time 𝑡
𝑅𝑈𝑤(𝑡): The resource utilization for the 𝑤th type of computing nodes at time 𝑡
𝑙𝑏𝑚(𝑡): The load-balance variance for 𝑝𝑚 at time 𝑡
𝐿𝐵𝑤(𝑡): The load-balance variance for the 𝑤th type of computing nodes at time 𝑡
𝐿𝐵𝑤: The average load-balance variance for the 𝑤th type of computing nodes

Conflicts of Interest

The authors declare that they have no conflicts of interest.
BE2015154 and BE2016120, and the Natural Science Foun- [12] X. Xu, X. Zhang, M. Khan, W. Dou, S. Xue, and S. Yu,
dation of Jiangsu Province (Grant no. BK20171458). Besides, “A balanced virtual machine scheduling method for energy-
this work is also supported by the Startup Foundation for performance trade-offs in cyber-physical cloud systems,” Future
Introducing Talent of NUIST, the Open Project from State Generation Computer Systems, 2017.
Key Laboratory for Novel Software Technology, Nanjing [13] L. Yu, L. Chen, Z. Cai, H. Shen, Y. Liang, and Y. Pan,
University, under Grant no. KFKT2017B04, the Priority Aca- “Stochastic Load Balancing for Virtual Resource Management
demic Program Development of Jiangsu Higher Education in Datacenters,” IEEE Transactions on Cloud Computing, pp. 1–
Institutions (PAPD) fund, Jiangsu Collaborative Innova- 14, 2016.
tion Center on Atmospheric Environment and Equipment [14] Y. Sahu, R. K. Pateriya, and R. K. Gupta, “Cloud server opti-
Technology (CICAEET), and the project “Six Talent Peaks mization with load balancing and green computing techniques
using dynamic compare and balance algorithm,” in Proceedings
Project in Jiangsu Province” under Grant no. XYDXXJS-040.
of the 5th International Conference on Computational Intelligence
Special thanks are due to Dou Ruihan, Nanjing Jinling High and Communication Networks, CICN 2013, pp. 527–531, India,
School, Nanjing, China, for his intelligent contribution to our September 2013.
algorithm discussion and experiment development. [15] G. Soni and M. Kalra, “A novel approach for load balancing