Joint Multi-User Computation Offloading and Data Caching For Hybrid Mobile Cloud Edge Computing
Abstract—In this paper, we investigate a hybrid mobile cloud/edge computing system with coexistence of centralized cloud and mobile edge computing, which enables computation offloading and data caching to improve the performance of users. Computation offloading and data caching decisions are jointly optimized to minimize the total execution delay at the mobile user side, while satisfying the constraints in terms of the maximum tolerable energy consumption of each user, the computation capability of each MEC server, and the cache capacity of each access point (AP). The formulated problem is non-convex and challenging because of the highly coupled decision variables. To address such an intractable problem, we first transform the original problem into an equivalent convex one by means of McCormick envelopes and auxiliary variables. On this basis, we propose a distributed algorithm based on the alternating direction method of multipliers (ADMM), which achieves near-optimal computation offloading and data caching decisions. The proposed algorithm has lower computational complexity than the centralized algorithm. Simulation results verify that the proposed algorithm can effectively reduce the computing delay for end users while ensuring the performance of each user.

Index Terms—Hybrid mobile cloud/edge computing, multi-user computation offloading, data caching, McCormick envelopes, ADMM.

I. INTRODUCTION

DRIVEN by the roaring Internet of Things (IoT), the explosive growth of various mobile applications such as remote healthcare systems, surveillance and security monitoring systems, and assisted driving calls for stringent requirements on ultra-low latency and high reliability [1], [2]. As a result, the volume of data traffic and the energy consumed to compute and deliver the data increase dramatically, imposing a heavy processing burden on IoT devices. Although offloading computing tasks to the cloud servers is a promising approach to address this challenge, it may incur higher energy consumption and longer latency in networks. By deploying mobile edge computing (MEC) servers at the access points (APs), computation-intensive tasks can be offloaded to the MEC servers instead of the remote cloud servers. MEC, serving as a potential solution to supplement cloud computing, has attracted great attention [3], [4]. However, MEC alone ignores the huge computation resources in the cloud servers. Therefore, making full use of the powerful resources at both the cloud and the edge in hybrid cloud/edge computing systems is particularly necessary and important.

At the same time, caching also shows attractive advantages in dealing with the proliferation of mobile data traffic [5]–[10]. Moreover, considerable research efforts have been dedicated to investigating the advantages of a joint design of computation offloading and caching in MEC systems, which alleviates the load of the backhaul links and increases the computation capacity provided to users [11]–[15]. In particular, the authors in [11], [12] investigated the joint design of computation offloading and caching strategies in a single-cell network. However, these works seldom considered the effects of users' execution strategies in multi-user computation offloading systems, or the relationship between computation offloading and the related input data. Although [13] proposed an integrated framework to dynamically orchestrate networking, caching, and computing resources, this work only focused on mobile scenarios, in which it is difficult to cache the associated input computation data in a fixed place. Meanwhile, [14] studied joint content caching and computing in software-defined networks, yet without considering computation offloading or the corresponding operations on the computation input data. Furthermore, their proposed scheme was not applicable to MEC networks. Then, the authors in [15] proposed an optimal offloading scheme with caching of computation results in MEC networks. However, the authors overlooked the huge computation and caching resources in the centralized cloud computing center.

The existing research on joint computation offloading and caching schemes in MEC networks only considered the computation offloading between users and APs, and neglected the

Manuscript received January 7, 2019; revised April 18, 2019 and July 1, 2019; accepted September 8, 2019. Date of publication September 19, 2019; date of current version November 12, 2019. This work was supported in part by the National Natural Science Foundation of China under Grants 61421001, 61871032, and 61801505, in part by China National S&T Major Projects 2017ZX03001017 and MCM20170101, in part by 111 Project of China under Grant B14010, in part by Beijing Natural Science Foundation under Grants 4152047 and L182038, in part by the Jiangsu Provincial Nature Science Foundation of China under Grant BK20170755, and in part by the National Postdoctoral Program for Innovative Talents of China under Grant BX201700109. The review of this article was coordinated by Prof. J.-M. Chung. (Corresponding authors: Zesong Fei; Jianchao Zheng.)
X. Yang and Z. Fei are with the School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China (e-mail: [email protected]; [email protected]).
J. Zheng is with the National Innovation Institute of Defense Technology, Academy of Military Sciences PLA, Beijing 100010, China, and also with the College of Communications Engineering, Army Engineering University of PLA, Nanjing 210007, China (e-mail: [email protected]).
N. Zhang is with the Department of Computing Sciences, Texas A&M University-Corpus Christi, Corpus Christi, TX 78412 USA (e-mail: ning.[email protected]).
A. Anpalagan is with the Department of Electrical and Computer Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada (e-mail: [email protected]).
Digital Object Identifier 10.1109/TVT.2019.2942334
0018-9545 © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
YANG et al.: JOINT MULTI-USER COMPUTATION OFFLOADING AND DATA CACHING FOR HYBRID MOBILE CLOUD/EDGE COMPUTING 11019
11020 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 68, NO. 11, NOVEMBER 2019
TABLE I
MAIN PARAMETERS IN THIS PAPER

databases) associated with specific computation tasks. Each AP can push its cached data to the mobile users which request the computation data, while the uncached data have to be delivered to the users via the backhaul link from the cloud servers. Our network model is applicable to many practical scenarios, such as autonomous vehicle applications, surveillance and security monitoring systems, and remote healthcare systems, where a large number of sensors or cameras have large amounts of data to be transmitted to the MEC servers on the APs or to the cloud servers for further analysis and computation. Mobile user mobility may make high-quality channel state information unavailable for achieving reliable data communication and improving the offloading performance [18], [19]. However, we do not consider these issues in this work and will discuss them in our future work. The main parameters used in this paper are summarized in Table I.

B. Communication Model

The uplink data rate is first introduced for the case where a user offloads its collected data to an AP. Furthermore, we consider that one user only accesses one AP within a time slot for data transmission. Let $g_{i,m}$ and $d_{i,m}$ denote the coefficient of the effective channel power gain and the distance from user $i$ to AP $m$, respectively. In this paper, we consider the scenario in which users move very slowly during the data offloading, so $g_{i,m}$ can be seen as a constant in a time slot and can change over different time slots (i.e., block fading channel). Then, the corresponding channel power gain can be given as

$$G_{i,m} = g_{i,m} d_{i,m}^{-\alpha}, \quad (1)$$

where $\alpha$ is the path loss factor. According to [20], [21], the uplink data rate of a user that chooses to offload its collected data to the AP via a wireless link can be expressed as

$$R_{i,m}^{U} = B \log_2 \left( 1 + \frac{G_{i,m} p_i}{\sum_{j \in \mathcal{I} \setminus \{i\}} G_{j,m} p_j + \sigma^2} \right), \quad i \in \mathcal{I},\ m \in \mathcal{M}, \quad (2)$$

where $B$, $p_i$, and $\sigma^2$ are the available spectrum bandwidth, the uplink transmit power of user $i$, and the noise power, respectively. Note that multiple users share the same spectrum resource in the wireless interference model.

Similarly, an AP may send the cached associated data (e.g., libraries and databases) required for computing a task to the corresponding user, which may reduce the heavy traffic burden on the backhaul link. Denote the downlink transmit power of AP $m$ as $p_m$; then the achievable transmit rate of user $i$ on the downlink is given by

$$R_{i,m}^{D} = B \log_2 \left( 1 + \frac{G_{i,m} p_m}{\sum_{n \in \mathcal{M} \setminus \{m\}} G_{i,n} p_n + \sigma^2} \right), \quad i \in \mathcal{I},\ m \in \mathcal{M}. \quad (3)$$

Note that since the size of the input computational data is much larger than the size of the computation results, the latency of transmitting the computation results is neglected in this work.

C. Computation Model

In the computation model, we consider that multiple tasks can be computed sequentially [22]. According to [23], the CPU computing capacity remains constant within one time slot. Thus, we focus on the widely used task model $H_i(S_i, W_i)$, where $S_i$ and $W_i$ stand for the size of the computation input data and the CPU cycles required to accomplish task $H_i$, respectively. In this paper, we consider that the input computation data $S_i$ can be divided into two parts, namely, the data collected by the user and the corresponding database of the computation task, which are denoted as $D_i$ and $U_i$, respectively. Taking positioning on autonomous vehicles as an example [24], [25], the data to be processed is composed of the sensor data and the vehicle-to-infrastructure data. The sensor data is collected from on-board vehicle sensors and includes information about the environment and other neighboring vehicles. The vehicle-to-infrastructure data includes weather conditions, traffic flow, etc., which is cached in APs or the cloud servers within one time slot.

In general, a computation task can be executed locally, at an AP, or on the more powerful cloud servers. Let $a_{i,m} \in \{0, 1\}$ denote whether data $D_i$ is offloaded to AP $m$ or not; the corresponding offloading decision of user $i$ is denoted by a series of binary variables, namely, $\mathbf{a}_i = \{a_{i,1}, a_{i,2}, \ldots, a_{i,M}\}$. Similarly, we define $b_{i,m} \in \{0, 1\}$ to indicate whether task $H_i$ is processed on the cloud servers or not, and the corresponding decision is expressed as $\mathbf{b}_i = \{b_{i,1}, b_{i,2}, \ldots, b_{i,M}\}$.
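As a sanity check on the communication model, the channel gain (1) and the interference-limited uplink rate (2) can be evaluated directly. The following Python sketch does so for a toy two-user, one-AP topology; all numerical values (bandwidth, powers, distances) are illustrative assumptions, not the paper's simulation settings.

```python
import math

def channel_gain(g, d, alpha=4.0):
    # Eq. (1): G_{i,m} = g_{i,m} * d_{i,m}^{-alpha}
    return g * d ** (-alpha)

def uplink_rate(i, m, G, p, B, sigma2):
    """Eq. (2): Shannon rate of user i at AP m, with every other user
    on the shared spectrum treated as interference."""
    interference = sum(G[j][m] * p[j] for j in range(len(p)) if j != i)
    sinr = G[i][m] * p[i] / (interference + sigma2)
    return B * math.log2(1.0 + sinr)

# Toy topology: 2 users, 1 AP (illustrative values only).
g = [[1.0], [1.0]]                 # small-scale gain coefficients
d = [[100.0], [150.0]]             # user-AP distances (m)
G = [[channel_gain(g[i][0], d[i][0])] for i in range(2)]
p = [0.1, 0.1]                     # uplink transmit powers (W)
B = 10e6                           # bandwidth (Hz)
sigma2 = 1e-13                     # noise power (W)

r0 = uplink_rate(0, 0, G, p, B, sigma2)
r1 = uplink_rate(1, 0, G, p, B, sigma2)
# user 0 is closer to the AP, so its SINR and hence its rate are higher
assert r0 > r1 > 0
```

The same function applies to the downlink rate (3) by swapping the roles of user and AP powers and summing interference over the other APs.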
Moreover, since the computation capacity of each MEC server is limited, the following constraint must hold:

$$\sum_{i \in \mathcal{I}} a_{i,m} W_i \leq O_m, \quad m \in \mathcal{M}, \quad (4)$$

where $O_m$ denotes the maximum computation capacity of MEC server $m$. Correspondingly, each user only adopts one offloading decision to process its computation task. Thus, the offloading decisions of user $i$ are constrained by

$$\sum_{m \in \mathcal{M}} (a_{i,m} + b_{i,m}) \leq 1, \quad i \in \mathcal{I}. \quad (5)$$

The constraint in (5) implies that only one offloading decision can be chosen for each computation task $H_i$.

D. Caching Model

The users may request a particular task which requires cached data, such as data from libraries and databases [26]. For each AP, one can determine whether to cache the data from the database according to the popularity information of the data. The caching placement decision for data $U_i$ can be denoted by a binary indicator variable $c_{i,m} \in \{0, 1\}$. Here, $c_{i,m} = 1$ means that data $U_i$ is cached at MEC server $m$; otherwise $c_{i,m} = 0$. Therefore, we give $\mathbf{c}_i = \{c_{i,1}, c_{i,2}, \ldots, c_{i,M}\}$ as the caching decision profile. However, due to the limited storage capability of an AP, not all data from the remote database can be cached at the same time. Therefore, the total size of all the cached data cannot exceed the storage capability $C_m$ of AP $m$, which can be expressed as

$$\sum_{i \in \mathcal{I}} (a_{i,m} D_i + c_{i,m} U_i) \leq C_m, \quad m \in \mathcal{M}. \quad (6)$$

Moreover, an AP can fetch the uncached associated data from the cloud servers via the backhaul fiber link, where the cloud servers store the entire computation database.

E. Problem Formulation

Each task $H_i$ can be computed locally by user $i$, at the AP, or on the cloud servers. Depending on the computation and caching strategies, the latency and energy consumption of task $H_i$ may differ. The processing time of task $H_i$ by local computing can be expressed as

$$T_i^{L} = c_{i,m} \frac{U_i}{R_{i,m}^{D}} + (1 - c_{i,m}) \left( \frac{U_i}{R_{i,m}^{D}} + \frac{U_i}{r^{MC}} \right) + \frac{W_i}{f_i^{L}} \quad (7a)$$
$$\phantom{T_i^{L}} = \frac{U_i}{R_{i,m}^{D}} + (1 - c_{i,m}) \frac{U_i}{r^{MC}} + \frac{W_i}{f_i^{L}}, \quad \forall i, m, \quad (7b)$$

where $\frac{U_i}{R_{i,m}^{D}}$ and $\frac{W_i}{f_i^{L}}$ denote the downlink transmission time between user $i$ and AP $m$ and the execution time of task $H_i$ completed locally by user $i$, respectively. In addition, $\frac{U_i}{r^{MC}}$ is the transmission delay of the associated data from the cloud servers to AP $m$, where $r^{MC}$ is the transmission rate between the cloud servers and the AP. We assume that the rate $r^{MC}$ is a constant regardless of the number of APs and users. Then, the energy consumption of accomplishing task $H_i$ locally can be calculated as

$$E_i^{L} = \varepsilon (f_i^{L})^2 W_i, \quad \forall i, \quad (8)$$

where $\varepsilon$ is the energy parameter, which is set as $\varepsilon = 10^{-11}$ according to the work in [27].

In the following, we present the processing time and energy consumption of computing task $i$ at the AP or on the cloud servers. The execution time of the MEC server for calculating task $H_i$ can be given as

$$T_{i,m}^{M} = c_{i,m} \frac{D_i}{R_{i,m}^{U}} + (1 - c_{i,m}) \left( \frac{D_i}{R_{i,m}^{U}} + \frac{U_i}{r^{MC}} \right) + \frac{W_i}{f_i^{M}} \quad (9a)$$
$$\phantom{T_{i,m}^{M}} = \frac{D_i}{R_{i,m}^{U}} + (1 - c_{i,m}) \frac{U_i}{r^{MC}} + \frac{W_i}{f_i^{M}}, \quad \forall i, m, \quad (9b)$$

where $\frac{D_i}{R_{i,m}^{U}}$ is the uplink transmission time for transmitting data $D_i$ from user $i$ to AP $m$, and $\frac{W_i}{f_i^{M}}$ is the computing delay for executing $H_i$ at AP $m$. Also, we can calculate the energy of user $i$ for offloading data $D_i$ as

$$E_{i,m}^{M} = p_i \frac{D_i}{R_{i,m}^{U}}, \quad \forall i, m. \quad (10)$$

The cloud computing time of task $H_i$ can be expressed as follows:

$$T_{i,m}^{C} = \frac{D_i}{R_{i,m}^{U}} + \frac{D_i}{r^{MC}} + \frac{W_i}{f_i^{C}}, \quad \forall i, m, \quad (11)$$

where $\frac{W_i}{f_i^{C}}$ stands for the latency of processing task $H_i$ on the cloud server. Moreover, it is easy to observe that the energy consumption $E_{i,m}^{C}$ of user $i$ for cloud computing is the same as for the MEC servers.

In order to reduce the time consumption of all the users while satisfying the limited battery capacity of the users, the computation capability constraint of each MEC server, and the cache capacity constraint of each AP, we jointly optimize the data caching and offloading decisions. The corresponding optimization problem can be formulated as follows:

$$\min_{\mathbf{a}_i, \mathbf{b}_i, \mathbf{c}_i} \ \sum_{m \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left[ (1 - a_{i,m} - b_{i,m}) T_i^{L} + a_{i,m} T_{i,m}^{M} + b_{i,m} T_{i,m}^{C} \right] \quad (12a)$$
$$\text{s.t.} \quad (4), (5), (6),$$
$$(1 - a_{i,m} - b_{i,m}) E_i^{L} \leq \rho_i E_i^{\max}, \quad (12b)$$
$$a_{i,m} E_{i,m}^{M} \leq \rho_i E_i^{\max}, \quad (12c)$$
$$b_{i,m} E_{i,m}^{C} \leq \rho_i E_i^{\max}, \quad (12d)$$
$$a_{i,m}, b_{i,m}, c_{i,m} \in \{0, 1\}, \quad \forall i, m, \quad (12e)$$

where the objective function (12a) minimizes the total computing task delay. The constraint (12b) states that the energy consumption of local processing cannot exceed the residual battery capacity of user $i$.
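The three execution delays (7), (9), and (11) are easy to compare numerically under the caching indicator $c_{i,m}$. The sketch below implements them directly; all task sizes, link rates, and CPU speeds are assumed example values, not the paper's settings.

```python
def local_delay(U, W, RD, rMC, fL, cached):
    # Eq. (7): the database U_i reaches the user via the AP downlink;
    # if it is not cached at the AP (cached=0) it first crosses the backhaul.
    return U / RD + (1 - cached) * U / rMC + W / fL

def edge_delay(D, U, W, RU, rMC, fM, cached):
    # Eq. (9): the collected data D_i goes up to the AP; the database is
    # fetched from the cloud only when it is not cached there.
    return D / RU + (1 - cached) * U / rMC + W / fM

def cloud_delay(D, W, RU, rMC, fC):
    # Eq. (11): D_i travels user -> AP -> cloud; the database already
    # resides in the cloud, so no extra fetch is needed.
    return D / RU + D / rMC + W / fC

# Illustrative task: sizes in bits, workload in CPU cycles (assumed values).
D, U, W = 2e6, 8e6, 2e8
RU, RD, rMC = 20e6, 40e6, 100e6   # uplink, downlink, backhaul rates (bit/s)
fL, fM, fC = 1e9, 10e9, 30e9      # local, MEC, cloud CPU speeds (cycles/s)

tL = local_delay(U, W, RD, rMC, fL, cached=1)
tM = edge_delay(D, U, W, RU, rMC, fM, cached=1)
tC = cloud_delay(D, W, RU, rMC, fC)
# with these assumed numbers, edge execution with a cache hit is fastest
assert tM < tC < tL
```

Which of the three modes wins depends on the interplay of link rates and CPU speeds, which is exactly why (12) optimizes the choice jointly per user.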
Here, $\rho_i$ is a weighting factor on the remaining energy consumption relative to the total battery capacity $E_i^{\max}$ of user $i$. The constraints (12c) and (12d) indicate that the energy consumed for MEC server processing and cloud server computing, respectively, is limited by the battery of the user.

Note that the optimization problem defined in (12) is a nonlinear combinatorial programming problem involving integer and binary variables. It is obvious that the constraints (4)–(6) and (12b)–(12e) are nonlinear and discrete. In addition, due to the coupling between the offloading decision variable and the caching decision variable (i.e., $b_{i,m} c_{i,m}$), the objective function in problem (12) is discrete and nonconvex. Therefore, it is challenging and intractable to find the optimal solutions to this problem within polynomial time complexity. In the following section, we show that this nonlinear combinatorial optimization problem can be converted into a linear programming problem by adopting McCormick envelopes and relaxation, and we propose a method to solve the transformed problem using a distributed algorithm.

III. JOINT COMPUTATION OFFLOADING AND CACHING SCHEME

In this section, we first decouple the product $b_{i,m} c_{i,m}$ in problem (12) with the aid of McCormick envelopes. After the decomposition, we reformulate problem (12) by relaxing the binary offloading and caching decision variables, and then propose a distributed algorithm for solving the transformed problem via consensus ADMM. The corresponding convergence results and complexity are also provided. Moreover, a method for recovering the binary offloading and caching decisions from the continuous variables is presented.

A. Problem Reformulation

Note that the bilinear product $b_{i,m} c_{i,m}$ makes problem (12) intractable. In the following, by defining $z_{i,m} = b_{i,m}(1 - c_{i,m})$, problem (12) can be transformed as

$$\min_{\mathbf{a}_i, \mathbf{b}_i, \mathbf{c}_i, \mathbf{z}} \ \sum_{m \in \mathcal{M}} \sum_{i \in \mathcal{I}} \Bigg\{ \frac{U_i}{R_{i,m}^{D}} + \frac{W_i}{f_i^{L}} + \frac{U_i}{r^{MC}} + a_{i,m} \left( \frac{D_i}{R_{i,m}^{U}} + \frac{W_i}{f_i^{M}} - \frac{U_i}{R_{i,m}^{D}} - \frac{W_i}{f_i^{L}} \right) + b_{i,m} \left( \frac{D_i}{R_{i,m}^{U}} + \frac{D_i}{r^{MC}} + \frac{W_i}{f_i^{C}} - \frac{U_i}{R_{i,m}^{D}} - \frac{W_i}{f_i^{L}} \right) - c_{i,m} \frac{U_i}{r^{MC}} - z_{i,m} \frac{U_i}{r^{MC}} \Bigg\} \quad (13a)$$
$$\text{s.t.} \quad (4), (5), (6), (12b)\text{–}(12e),$$
$$a_{i,m}, b_{i,m}, c_{i,m} \in \{0, 1\}, \quad \forall i, m, \quad (13b)$$
$$z_{i,m} = b_{i,m}(1 - c_{i,m}), \quad i \in \mathcal{I}, m \in \mathcal{M}. \quad (13c)$$

To obtain the desired convex relaxation of constraint (13c), we use the McCormick relaxation method [28] to get the convex relaxation constraints:

$$z_{i,m} \leq b_{i,m}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (14a)$$
$$z_{i,m} \leq 1 - c_{i,m}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (14b)$$
$$z_{i,m} \geq 0, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (14c)$$
$$z_{i,m} \geq b_{i,m} - c_{i,m}, \quad i \in \mathcal{I}, m \in \mathcal{M}. \quad (14d)$$

In particular, according to [29] and [30], it is not difficult to verify that constraint (13c) is strictly equivalent to constraints (14a)–(14d).

However, even after decoupling the bilinear variables, problem (13) remains non-convex due to the binary variables $a_{i,m}, b_{i,m}, c_{i,m}, z_{i,m}$. To deal with this issue, the binary variables are relaxed to continuous decision variables, $0 \leq a_{i,m} \leq 1$, $0 \leq b_{i,m} \leq 1$, $0 \leq c_{i,m} \leq 1$, and $0 \leq z_{i,m} \leq 1$. The relaxed variables can be interpreted as the fraction of the associated computing task data that is offloaded to the MEC servers or the remote cloud servers, or cached at the MEC servers. Thus, problem (13) can be rewritten as

$$\min_{\mathbf{a}_i, \mathbf{b}_i, \mathbf{c}_i, \mathbf{z}} \ \sum_{m \in \mathcal{M}} \sum_{i \in \mathcal{I}} \Bigg\{ \frac{U_i}{R_{i,m}^{D}} + \frac{W_i}{f_i^{L}} + \frac{U_i}{r^{MC}} + a_{i,m} \left( \frac{D_i}{R_{i,m}^{U}} + \frac{W_i}{f_i^{M}} - \frac{U_i}{R_{i,m}^{D}} - \frac{W_i}{f_i^{L}} \right) + b_{i,m} \left( \frac{D_i}{R_{i,m}^{U}} + \frac{D_i}{r^{MC}} + \frac{W_i}{f_i^{C}} - \frac{U_i}{R_{i,m}^{D}} - \frac{W_i}{f_i^{L}} \right) - c_{i,m} \frac{U_i}{r^{MC}} - z_{i,m} \frac{U_i}{r^{MC}} \Bigg\} \quad (15a)$$
$$\text{s.t.} \quad \sum_{i \in \mathcal{I}} a_{i,m} W_i \leq O_m, \quad m \in \mathcal{M}, \quad (15b)$$
$$\sum_{m \in \mathcal{M}} (a_{i,m} + b_{i,m}) \leq 1, \quad i \in \mathcal{I}, \quad (15c)$$
$$\sum_{i \in \mathcal{I}} (a_{i,m} D_i + c_{i,m} U_i) \leq C_m, \quad m \in \mathcal{M}, \quad (15d)$$
$$(1 - a_{i,m} - b_{i,m}) E_i^{L} \leq \rho_i E_i^{\max}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15e)$$
$$a_{i,m} E_{i,m}^{M} \leq \rho_i E_i^{\max}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15f)$$
$$b_{i,m} E_{i,m}^{C} \leq \rho_i E_i^{\max}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15g)$$
$$z_{i,m} \leq b_{i,m}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15h)$$
$$z_{i,m} \leq 1 - c_{i,m}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15i)$$
$$z_{i,m} \geq 0, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15j)$$
$$z_{i,m} \geq b_{i,m} - c_{i,m}, \quad i \in \mathcal{I}, m \in \mathcal{M}, \quad (15k)$$
$$a_{i,m}, b_{i,m}, c_{i,m}, z_{i,m} \in [0, 1], \quad \forall i, m. \quad (15l)$$

Then, we show that problem (15) is convex through Proposition 1.

Proposition 1: If problem (15) is feasible, it is a convex optimization problem with respect to the optimization variables $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{c}_i$, and $\mathbf{z}$.
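For binary $b_{i,m}$ and $c_{i,m}$, the McCormick inequalities (14a)–(14d) force $z_{i,m} = b_{i,m}(1 - c_{i,m})$ exactly, while at fractional points they only bound the product. A minimal Python check of this equivalence:

```python
from itertools import product

def mccormick_bounds(b, c):
    """Bounds on z implied by (14a)-(14d) for z = b*(1-c), with b, c in [0,1]."""
    lower = max(0.0, b - c)        # (14c) and (14d)
    upper = min(b, 1.0 - c)        # (14a) and (14b)
    return lower, upper

# For binary b, c the envelope is tight: lower == upper == b*(1-c),
# which is why (13c) can be replaced by the linear constraints (14).
for b, c in product([0, 1], repeat=2):
    lo, hi = mccormick_bounds(b, c)
    assert lo == hi == b * (1 - c)

# At a fractional point the envelope is a genuine relaxation:
lo, hi = mccormick_bounds(0.5, 0.5)
assert lo == 0.0 and hi == 0.5     # the true product 0.25 lies strictly inside
```

This is the standard McCormick envelope of a bilinear term specialized to the unit box, which is what makes the subsequent continuous relaxation (15) linear in all four variables.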
Proof: With the aid of McCormick envelopes and the relaxation of the binary variables, the objective function of problem (15) is a linear combination of the variables $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{c}_i$, and $\mathbf{z}$, and hence is convex. Meanwhile, the constraints (15b)–(15l) are linear in these variables, so the feasible set of problem (15) is convex. Therefore, problem (15) is a convex optimization problem. $\blacksquare$

To solve problem (15) in a distributed manner, each AP $m \in \mathcal{M}$ keeps local copies $\tilde{\mathbf{a}}^m = \{\tilde{a}^m_{i,k}\}$, $\tilde{\mathbf{b}}^m = \{\tilde{b}^m_{i,k}\}$, and $\tilde{\mathbf{z}}^m = \{\tilde{z}^m_{i,k}\}$ of the global variables. The local feasible set of AP $m$ is

$$\chi_m = \left\{ (\tilde{\mathbf{a}}^m, \tilde{\mathbf{b}}^m, \tilde{\mathbf{z}}^m, \mathbf{c}^m) \;\middle|\;
\begin{array}{l}
\sum_{i \in \mathcal{I}} \tilde{a}^m_{i,k} W_i \leq O_k, \ \forall k \\
\sum_{k \in \mathcal{M}} (\tilde{a}^m_{i,k} + \tilde{b}^m_{i,k}) \leq 1, \ \forall i \\
\sum_{i \in \mathcal{I}} (\tilde{a}^m_{i,k} D_i + c_{i,k} U_i) \leq C_k, \ \forall k \\
(1 - \tilde{a}^m_{i,k} - \tilde{b}^m_{i,k}) E_i^{L} \leq \rho_i E_i^{\max}, \ \forall i, k \\
\tilde{a}^m_{i,k} E_{i,k}^{M} \leq \rho_i E_i^{\max}, \ \forall i, k \\
\tilde{b}^m_{i,k} E_{i,k}^{C} \leq \rho_i E_i^{\max}, \ \forall i, k \\
\tilde{z}^m_{i,k} \leq \tilde{b}^m_{i,k}, \ \forall i, k \\
\tilde{z}^m_{i,k} \leq 1 - c_{i,m}, \ \forall i, k \\
\tilde{z}^m_{i,k} \geq 0, \ \forall i, k \\
\tilde{z}^m_{i,k} \geq \tilde{b}^m_{i,k} - c_{i,m}, \ \forall i, k
\end{array}
\right\}. \quad (16)$$

Moreover, the objective function (15a) can be expressed as $\min \sum_{m \in \mathcal{M}} \sum_{i \in \mathcal{I}} f(\mathbf{a}_i, \mathbf{b}_i, \mathbf{c}_i, \mathbf{z})$. For each station $m$, the associated local function can be formulated as

$$g_m(e) = \begin{cases} \sum_{i \in \mathcal{I}} f(e), & e \in \chi_m; \\ 0, & \text{otherwise}, \end{cases} \quad (17)$$

so that the global consensus version of problem (15) over all variables can be shown as

$$\min \ \sum_{m \in \mathcal{M}} g_m(e) \quad \text{s.t.} \quad \tilde{a}^m_{i,k} = a_{i,k}, \ \tilde{b}^m_{i,k} = b_{i,k}, \ \tilde{z}^m_{i,k} = z_{i,k}, \quad \forall i, k, m, \quad (18)$$

where $\sigma^m = \{\sigma^m_{i,k}\}$, $\delta^m = \{\delta^m_{i,k}\}$, and $\eta^m = \{\eta^m_{i,k}\}$ denote the Lagrangian multipliers associated with the consensus constraints of problem (18), and $\rho$ is a positive penalty parameter, which strongly affects the behavior of the iterative method. Moreover, $s = \{\mathbf{a}_i, \mathbf{b}_i, \mathbf{z}\}$. In order to optimize the target variables conveniently, the linear and quadratic terms of the equality constraints are combined by scaling the Lagrange multipliers with respect to (18), and the augmented Lagrangian function (19) can be rewritten as follows:

$$\mathcal{L}_\rho(e, s, \{u^m, v^m, w^m\}) = \sum_{m \in \mathcal{M}} g_m(e) + \frac{\rho}{2} \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{a}^m_{i,k} - a_{i,k} + u^m_{i,k} \right\|^2 + \frac{\rho}{2} \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{b}^m_{i,k} - b_{i,k} + v^m_{i,k} \right\|^2 + \frac{\rho}{2} \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{z}^m_{i,k} - z_{i,k} + w^m_{i,k} \right\|^2, \quad (20)$$

where $u^m_{i,k} = \frac{\sigma^m_{i,k}}{\rho}$, $v^m_{i,k} = \frac{\delta^m_{i,k}}{\rho}$, and $w^m_{i,k} = \frac{\eta^m_{i,k}}{\rho}$. Moreover, $u^m = \{u^m_{i,k}\}$, $v^m = \{v^m_{i,k}\}$, and $w^m = \{w^m_{i,k}\}$ are the new dual variables.
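The scaled form (20) comes from completing the square in each consensus residual with $u = \sigma/\rho$: for a scalar residual $r$, $\sigma r + \frac{\rho}{2} r^2 = \frac{\rho}{2}(r + u)^2 - \frac{\rho}{2} u^2$, so the scaled and unscaled augmented Lagrangians differ only by a constant independent of the primal variables. A quick numerical check (the values of $\rho$ and $\sigma$ are arbitrary illustrative choices):

```python
# Verify: sigma*r + (rho/2)*r**2 == (rho/2)*(r + u)**2 - (rho/2)*u**2
# for u = sigma / rho, at several residual values r (r plays the role
# of a consensus residual such as a~ - a).
rho, sigma = 2.0, 0.6
u = sigma / rho
for r in [-1.0, -0.3, 0.0, 0.7, 2.5]:
    unscaled = sigma * r + 0.5 * rho * r ** 2
    scaled = 0.5 * rho * (r + u) ** 2 - 0.5 * rho * u ** 2
    assert abs(unscaled - scaled) < 1e-12
```

Because the dropped term $-\frac{\rho}{2} u^2$ does not depend on $e$ or $s$, the minimizers of (19) and (20) over the primal variables coincide, which justifies working with the more compact scaled form.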
The multipliers are obtained from the corresponding dual problem, in which the dual function $d(\theta)$ is expressed by

$$d(\theta) = \min_{e, s} \mathcal{L}_\rho(e, s, \theta), \quad (22)$$

where $\theta = \{u^m, v^m, w^m\}$.

The problem (23) can be decomposed into $N$ parallel subproblems, and each subproblem can be solved separately at each AP according to the following formulation:

$$e^{(t+1)} = \arg\min_{\{\tilde{a}^m_{i,k}, \tilde{b}^m_{i,k}, \tilde{z}^m_{i,k}, c_{i,m}\}} \left[ g_m(e) + \frac{\rho}{2} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{a}^m_{i,k} - a^{(t)}_{i,k} + u^{m(t)}_{i,k} \right\|^2 + \frac{\rho}{2} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{b}^m_{i,k} - b^{(t)}_{i,k} + v^{m(t)}_{i,k} \right\|^2 + \frac{\rho}{2} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{z}^m_{i,k} - z^{(t)}_{i,k} + w^{m(t)}_{i,k} \right\|^2 \right]. \quad (24)$$

Accordingly, each station $m \in \mathcal{M}$ solves the following equivalent optimization problem at the $(t+1)$-th iteration:

$$\min_{\{\tilde{a}^m_{i,k}, \tilde{b}^m_{i,k}, \tilde{z}^m_{i,k}, c_{i,m}\}} \left[ g_m(e) + \frac{\rho}{2} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{a}^m_{i,k} - a^{(t)}_{i,k} + u^{m(t)}_{i,k} \right\|^2 + \frac{\rho}{2} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{b}^m_{i,k} - b^{(t)}_{i,k} + v^{m(t)}_{i,k} \right\|^2 + \frac{\rho}{2} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{z}^m_{i,k} - z^{(t)}_{i,k} + w^{m(t)}_{i,k} \right\|^2 \right]$$
$$\text{s.t.} \quad e \in \chi_m. \quad (25)$$

It is easy to observe that problem (25) is a convex problem with a quadratic objective function and convex constraints. Thus, the corresponding optimal solution can be found by using the primal-dual interior-point algorithm [31] or standard software such as CVX or CPLEX.

Step 2. Global variables updating: In this step, we focus on the global variables updating. Given $e^{(t+1)}$, we minimize $\mathcal{L}_\rho$ with respect to the global variables $s$. Accordingly, the global variables $s$ can be updated according to the following formulations:

$$\{a_i\}^{(t+1)} = \arg\min_{\{a_{i,k}\}} \left[ \frac{\rho}{2} \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{a}^{m(t+1)}_{i,k} - a_{i,k} + u^{m(t)}_{i,k} \right\|^2 \right], \quad (26)$$
$$\{b_i\}^{(t+1)} = \arg\min_{\{b_{i,k}\}} \left[ \frac{\rho}{2} \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{b}^{m(t+1)}_{i,k} - b_{i,k} + v^{m(t)}_{i,k} \right\|^2 \right], \quad (27)$$
$$\{z\}^{(t+1)} = \arg\min_{\{z_{i,k}\}} \left[ \frac{\rho}{2} \sum_{m \in \mathcal{M}} \sum_{k \in \mathcal{M}} \sum_{i \in \mathcal{I}} \left\| \tilde{z}^{m(t+1)}_{i,k} - z_{i,k} + w^{m(t)}_{i,k} \right\|^2 \right]. \quad (28)$$

Since problems (26), (27), and (28) are unconstrained quadratic convex problems, we differentiate them with respect to $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{z}$ and obtain the following equations:

$$\sum_{m \in \mathcal{M}} \rho \left( \tilde{a}^{m(t+1)}_{i,k} - a_{i,k} + u^{m(t)}_{i,k} \right) = 0, \quad \forall i, k, \quad (29)$$
$$\sum_{m \in \mathcal{M}} \rho \left( \tilde{b}^{m(t+1)}_{i,k} - b_{i,k} + v^{m(t)}_{i,k} \right) = 0, \quad \forall i, k, \quad (30)$$
$$\sum_{m \in \mathcal{M}} \rho \left( \tilde{z}^{m(t+1)}_{i,k} - z_{i,k} + w^{m(t)}_{i,k} \right) = 0, \quad \forall i, k. \quad (31)$$

Then the global solutions w.r.t. $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{z}$ can be obtained as follows:

$$a^{(t+1)}_{i,k} = \frac{1}{M} \sum_{m \in \mathcal{M}} \left( \tilde{a}^{m(t+1)}_{i,k} + u^{m(t)}_{i,k} \right), \quad \forall i, k, \quad (32)$$
$$b^{(t+1)}_{i,k} = \frac{1}{M} \sum_{m \in \mathcal{M}} \left( \tilde{b}^{m(t+1)}_{i,k} + v^{m(t)}_{i,k} \right), \quad \forall i, k, \quad (33)$$
$$z^{(t+1)}_{i,k} = \frac{1}{M} \sum_{m \in \mathcal{M}} \left( \tilde{z}^{m(t+1)}_{i,k} + w^{m(t)}_{i,k} \right), \quad \forall i, k. \quad (34)$$

By initializing the Lagrange multipliers in (32), (33), and (34) as zero at the $t$-th iteration, these formulations reduce to

$$a^{(t+1)}_{i,k} = \frac{1}{M} \sum_{m \in \mathcal{M}} \tilde{a}^{m(t+1)}_{i,k}, \quad \forall i, k, \quad (35)$$
$$b^{(t+1)}_{i,k} = \frac{1}{M} \sum_{m \in \mathcal{M}} \tilde{b}^{m(t+1)}_{i,k}, \quad \forall i, k, \quad (36)$$
$$z^{(t+1)}_{i,k} = \frac{1}{M} \sum_{m \in \mathcal{M}} \tilde{z}^{m(t+1)}_{i,k}, \quad \forall i, k. \quad (37)$$
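The closed-form global updates (32)–(37) are simple entry-wise averages of the local copies over the $M$ stations. A minimal sketch for one scalar entry (values are illustrative):

```python
def consensus_average(local_copies, multipliers=None):
    """Global update (32)-(37): average one entry of the local copies over
    the M stations, adding the scaled multipliers when they are nonzero."""
    M = len(local_copies)
    if multipliers is None:
        multipliers = [0.0] * M       # zero-initialized multipliers -> Eq. (35)
    return sum(x + u for x, u in zip(local_copies, multipliers)) / M

# Three APs hold local copies of the same entry a_{i,k}.
a_tilde = [0.9, 0.7, 0.8]
a_new = consensus_average(a_tilde)    # Eq. (35) with zero multipliers
assert abs(a_new - 0.8) < 1e-12
```

This averaging is what makes the global step cheap: it needs only one pass over the local copies gathered from the APs, matching the $O(I(M+1))$ complexity claimed later for this step.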
Algorithm 1: Binary Variables Recovery Algorithm.
1: Set M := ∅;
2: for each k ∈ M do
3:   set a*_{ĩ,k} := 1 with ĩ = arg max_{i∈I} a*_{i,k}, and a*_{i,k} := 0 for all i ∈ I\{ĩ};
4:   set M := M ∪ {k};
     If any of the constraints (4)–(6) and (12b)–(12e) is not satisfied, Then Break
5: end for
6: Output the recovered binary variables a*_{i,k}, ∀i ∈ I, k ∈ M.

Algorithm 2: Proposed Distributed ADMM-Based Computation Offloading and Data Caching Algorithm.
Input: The number of APs M, ρ_i, E_i^max, and α
Output: a_i, b_i, and c_i, i ∈ I
1: Initialization: Set the stopping criterion values ε^pri and ε^dual, initialize the feasible point (a_i^(t), b_i^(t), c_i^(t)), i ∈ I, and the scaled Lagrange multiplier vectors u^m, v^m, and w^m, t = 0;
2: repeat
3:   for each AP m, m ∈ M do
4:     Update the local variables ã^{m(t+1)}, b̃^{m(t+1)}, z̃^{m(t+1)}, c^{m(t+1)} by solving problem (25);
5:   end for
6:   Update the global variables a_i^{(t+1)}, b_i^{(t+1)}, and z^{(t+1)} using (35), (36), and (37);
7:   Update the multipliers u^{m(t+1)}, v^{m(t+1)}, and w^{m(t+1)} using (38), (39), and (40);
8:   t = t + 1;
9: until ||a_i^{(t+1)} − a_i^{(t)}||_2 ≤ ε^dual, ||b_i^{(t+1)} − b_i^{(t)}||_2 ≤ ε^dual, and ||z^{(t+1)} − z^{(t)}||_2 ≤ ε^dual, ∀i.
10: return a_i^{(t+1)}, b_i^{(t+1)}, and c_i^{(t+1)}, i ∈ I, to problem (15);
11: Map the continuous values to binary values using Algorithm 1;
12: Output the recovered binary variables a*_i, b*_i, and c*_i, i ∈ I.

Equations (35)–(37) imply that the global variables $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{z}$ are updated at the $(t+1)$-th iteration by averaging the local copies over all the corresponding stations.

Step 3. Lagrange multipliers updating: The Lagrange multipliers are updated in this step as follows:

$$u^{m(t+1)} = u^{m(t)} + \tilde{a}^{m(t+1)} - a^{(t+1)}, \quad (38)$$
$$v^{m(t+1)} = v^{m(t)} + \tilde{b}^{m(t+1)} - b^{(t+1)}, \quad (39)$$
$$w^{m(t+1)} = w^{m(t)} + \tilde{z}^{m(t+1)} - z^{(t+1)}. \quad (40)$$

Since the cloud servers receive all the updated local variables from each MEC server, the Lagrange multiplier updates (38)–(40) are calculated at the cloud servers.

Step 4. Algorithm stopping criterion: The stopping criterion in [33] is adopted. The primal residuals for each MEC server under the feasibility condition must be small, such that

$$\left\| \tilde{a}^{m(t+1)} - a^{(t+1)} \right\|_2 \leq \epsilon^{pri}, \quad \forall m, \quad (41)$$
$$\left\| \tilde{b}^{m(t+1)} - b^{(t+1)} \right\|_2 \leq \epsilon^{pri}, \quad \forall m, \quad (42)$$
$$\left\| \tilde{z}^{m(t+1)} - z^{(t+1)} \right\|_2 \leq \epsilon^{pri}, \quad \forall m. \quad (43)$$

In the same spirit, the dual residuals under the dual feasibility condition are expressed as follows:

$$\left\| a^{(t+1)} - a^{(t)} \right\|_2 \leq \epsilon^{dual}, \quad (44)$$
$$\left\| b^{(t+1)} - b^{(t)} \right\|_2 \leq \epsilon^{dual}, \quad (45)$$
$$\left\| z^{(t+1)} - z^{(t)} \right\|_2 \leq \epsilon^{dual}. \quad (46)$$

Here, $\epsilon^{pri} > 0$ and $\epsilon^{dual} > 0$ denote the feasibility tolerances for the primal and dual feasibility conditions, respectively.

Step 5. Binary variables recovery: Due to the continuous relaxation in Subsection III-A, the obtained continuous values $\mathbf{a}_i$, $\mathbf{b}_i$, $\mathbf{c}_i$, $\mathbf{z}$ must be mapped back to binary values. According to [34], the binary values can be recovered as follows:

$$a^{*}_{\tilde{i},k} = \begin{cases} 1, & \text{if } \tilde{i} = \arg\max_{i \in \mathcal{I}} a^{*}_{i,k}; \\ 0, & \text{otherwise}. \end{cases} \quad (47)$$

Here, $a^{*}_{\tilde{i},k}$ denotes the recovered binary value. The details of the binary variables recovery algorithm are given in Algorithm 1. In the same way, the other three continuous relaxation values are converted into binary values. The details of the proposed distributed ADMM-based computation offloading and data caching algorithm are given in Algorithm 2.

C. Algorithm Convergence

According to Proposition 1, problem (15) is convex. Besides, since problem (18) is the equivalent global consensus version of problem (15), obtained by introducing local copies of the global variables at each AP, and $g_m(e)$ is linear with respect to $e$, problem (18) is also convex [35]. According to [36], [37], all the variables and the objective function of problem (18) are bounded. Then, it can be easily verified that problem (18) converges to its optimal point. In addition, since the feasible set of local variables of AP $m \in \mathcal{M}$ and the constraints of problem (18) are affine with respect to the optimization variables [33], the optimal duality gap of problem (18) is zero. Based on the appendix of [33], the objective function of problem (18) is convex, proper, and closed, and its corresponding Lagrangian function (20) has a saddle point. Therefore, by applying the above updating rules of the ADMM method, the proposed ADMM algorithm is guaranteed to find the optimal point of problem (15).
D. Complexity Analysis

It is worthwhile to compare the complexity of the proposed ADMM-based distributed algorithm with four other algorithms: the centralized algorithm, optimal offloading and caching at the AP without offloading to cloud servers (OOCWC) [38], optimal offloading to APs or cloud servers without caching (OMCWC) [39], and task caching and offloading (TCO) [12]. For the centralized algorithm in [40], the complexity is O(I³(M+1)³). Since [38] addresses its corresponding problem using a distributed ADMM method in a single-cell scenario, OOCWC applies consensus ADMM to solve the problem of computation offloading and caching at the AP without offloading to the cloud servers in the multi-cell scenario. Thus, the complexity of OOCWC is O(I³) per iteration. As stated in [39], the complexity of OMCWC per iteration is O(I). For TCO, the complexity is O(I³) per iteration [40]. For the proposed ADMM-based distributed algorithm, the computation complexity of the local variables updating step is O(I³). For the global variables updating, the computation complexity is O(I(M+1)). Besides, the computation complexity of the Lagrange multipliers updating is O(I) [35], so the per-iteration computational complexity of the proposed algorithm is O(I³) + O(I(M+1)) + O(I) ≈ O(I³). Therefore, the overall computational complexity of the proposed distributed algorithm is KO(I³), where K denotes the number of iterations needed to solve problem (18). Clearly, the proposed algorithm has much lower computational complexity than the centralized algorithm. Although the complexity of the proposed algorithm is the same as or less than that of the other three algorithms, its performance is better, as demonstrated in the simulations.

IV. SIMULATION RESULTS

In this section, extensive simulation results are presented to verify the performance of the proposed distributed algorithm via ADMM. The simulation is performed in Matlab R2018a. In this simulation, the scenario in which the wireless channels remain unchanged during the processing of the mobile users' computing tasks is considered [16]. This is applicable to a relatively static or low-speed movement scenario. Consider a scenario consisting of five APs, where the radius of each cell is set to 250 m. Moreover, each user is randomly distributed in the coverage area of the 5 APs and can associate with the corresponding AP through only one wireless channel. The wireless channel gain G_{i,m} is modeled as g_{i,m} d^{-α}_{i,m} [41], where g_{i,m} can be set as a constant, d_{i,m} is the distance between a user and its associated AP, and the path loss coefficient is α = 4. The uplink transmission power of each

Fig. 2. The convergence of the proposed algorithm.

allocated for each user i at the AP is 10G cycles/sec [42]. Furthermore, the cloud servers allocate a computational capability of 30G cycles/sec to each user i. Each AP has the same storage capability to cache the associated input databases, and the storage capability is 10 GB.

A. Comparison With Other Methods

To evaluate the proposed distributed algorithm, we compare our optimal joint offloading and caching strategy with the following four benchmark strategies, namely:
• Optimal offloading and caching at the AP without offloading to cloud servers (OOCWC): Each user only chooses to offload the computing task to an AP, and the AP caches the corresponding input computation data [38].
• Optimal offloading to APs or cloud servers without caching (OMCWC): Each user may offload its computation task to APs or cloud servers to process the task, and the corresponding computation data is not cached at the APs [39].
• Centralized optimal offloading and caching strategy (CODCS): A centralized data offloading and caching decision is found for all BSs and cloud servers to minimize the latency of the hybrid three-layer network.
• Task caching and offloading (TCO): [12] designs the task caching and offloading based on the block coordinate descent method, which gives the offloading strategy first and then designs the caching strategy iteratively.

B. Performance Comparison

Fig. 2 shows the convergence behavior of the proposed distributed algorithm and of OMCWC, OOCWC, CODCS, and TCO for comparison. In this figure, the number of users is set as
user is set equal to 0.01 W. Besides, the downlink transmission 10. It can be easily seen that the proposed algorithm is greatly
power of each AP is 2 W. The total channel bandwidth is set to improved within the convergence time. Moreover, our proposed
be 20 MHz, and the noise power is σ 2 = 2 × 10−13 W. algorithm can finally converge to the near optimal value, and
For the computation task, we consider the autonomous vehi- the gap between proposed algorithm and CODCS is narrow and
cle applications [25], where the size of computation inputting negligible. Moreover, our proposed algorithm uses the global
data is 5500 KB and the correspondingly required number of variable consensus optimization method which avoids the local
CPU cycles to accomplish the task is 1000 M. The computa- optimum at the expense of consuming more time to explore
tional capability fiL is set equally for each user, e.g., fiL = the near optimal solution. Also we can see that our proposed
512 M cycles/sec, ∀i ∈ I, and the computational capability algorithm converges to a stable point within about 10 iterations.
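The per-iteration structure analyzed above (local variable updates, a global consensus average, and a Lagrange multiplier step) can be illustrated on a toy problem. The quadratic local costs and all values below are illustrative stand-ins, not the paper's actual per-user subproblems:

```python
import numpy as np

# Minimal global-variable-consensus ADMM sketch (cf. Boyd et al. [33]).
# Each of I "users" holds a toy local cost f_i(x) = (x - a_i)^2; the
# consensus minimizer is the average of the a_i. The three steps mirror
# the complexity analysis: local x-updates, global z-average, dual update.
def consensus_admm(a, rho=1.0, iters=50):
    I = len(a)
    x = np.zeros(I)   # local copies of the decision variable
    z = 0.0           # global consensus variable
    u = np.zeros(I)   # scaled dual variables (Lagrange multipliers)
    for _ in range(iters):
        # 1) local variables update:
        #    argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # 2) global variable update: average of local copies plus duals
        z = np.mean(x + u)
        # 3) Lagrange multipliers update
        u = u + x - z
    return z

a = np.array([1.0, 3.0, 5.0, 7.0])
print(consensus_admm(a))  # converges to mean(a) = 4.0
```

Each user solves only its own small subproblem; the global step is a cheap average, which is what keeps the per-iteration cost at O(I³) + O(I(M+1)) + O(I) rather than the centralized O(I³(M+1)³).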
Authorized licensed use limited to: University of Calgary. Downloaded on March 05,2024 at 17:19:57 UTC from IEEE Xplore. Restrictions apply.
YANG et al.: JOINT MULTI-USER COMPUTATION OFFLOADING AND DATA CACHING FOR HYBRID MOBILE CLOUD/EDGE COMPUTING 11027
Fig. 3. Performance comparison of different schemes versus the computation requirement of tasks.

Fig. 5. The total delay versus the size of the corresponding database of the computation task.
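The delay and energy trends reported around these figures follow from a standard offloading model. Below is a back-of-the-envelope sketch built from the stated simulation parameters; the Shannon-rate formula over the full 20 MHz band, the gain constant g = 1e-3, and the user-AP distance d = 100 m are illustrative assumptions, not values from the paper:

```python
import math

B      = 20e6    # total channel bandwidth (Hz)
sigma2 = 2e-13   # noise power (W)
P_up   = 0.01    # uplink transmission power per user (W)
alpha  = 4       # path-loss exponent
g      = 1e-3    # constant gain factor g_{i,m} (assumed)
d      = 100.0   # distance d_{i,m} in metres (assumed)

D = 5500 * 8e3   # task input data: 5500 KB in bits
C = 1000e6       # required CPU cycles per task
f_local, f_ap, f_cloud = 512e6, 10e9, 30e9   # cycles/sec

G = g * d ** (-alpha)                        # channel gain G_{i,m} = g * d^{-alpha}
rate = B * math.log2(1 + P_up * G / sigma2)  # uplink rate (bits/sec)

t_local  = C / f_local          # execute entirely on the device
t_edge   = D / rate + C / f_ap  # upload the input data, then compute at the MEC server
t_cached = C / f_ap             # input data already cached at the AP: no upload needed
e_upload = P_up * D / rate      # uplink transmission energy (J)

print(f"uplink rate  : {rate / 1e6:.2f} Mbit/s")
print(f"local        : {t_local:.3f} s")
print(f"edge (upload): {t_edge:.3f} s, upload energy {e_upload * 1e3:.2f} mJ")
print(f"edge (cached): {t_cached:.3f} s")
```

With these assumed values, caching the input data at the AP removes the upload term entirely, which is precisely the delay saving that the joint offloading-and-caching design targets.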
11028 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 68, NO. 11, NOVEMBER 2019
Fig. 7. The total user delay versus different number of users.

The total users' energy consumption versus the size of the data collected by users is shown in Fig. 6. In Fig. 6, the number of users is set to 10. On the whole, the total users' energy consumption increases with the size of the data collected by users for the computation task. From Fig. 6, the proposed ADMM algorithm achieves lower total users' energy consumption than all other schemes. In the low regime of the data collected by users, the performance gap between the proposed ADMM algorithm and the other four schemes is small. The reason is that most users compute their tasks at the APs or cloud servers in this regime. As the data collected by users increases, the incremental energy consumption mainly comes from transmitting the collected data. Moreover, when the data collected by users is too large, most users may switch to local execution instead of offloading their tasks to the APs or cloud servers, which results in more energy consumption. The simulation results also demonstrate that the proposed ADMM algorithm can reduce the total user energy consumption by up to 21.73%, 31.94%, and 40.85% compared to the OMCWC, OOCWC, and TCO schemes, respectively. This is because our proposed ADMM algorithm takes full advantage of computation offloading to the APs or cloud servers, giving users more flexibility in offloading.

In Fig. 7, we compare the performance of the different algorithms with the number of users varying from 2 to 26. Mobile users are randomly distributed in the coverage area of the 5 APs. It can be seen that our proposed scheme outperforms the other schemes.

V. CONCLUSION

In this paper, we investigated the joint optimization of the computation offloading and data caching strategy in a hybrid mobile cloud/edge computing system, and formulated it as a constrained optimization problem with the objective of minimizing the total user delay while satisfying the constraints on the users' energy consumption and the APs' storage and computation capabilities. To tackle this bilinear discrete program, we first transformed the original nonconvex optimization problem into a linear program using McCormick envelopes and binary variable relaxation. After that, in order to overcome the disadvantages of a centralized algorithm, a distributed joint computation offloading with data caching scheme based on ADMM was proposed. Finally, numerical results demonstrated that the proposed scheme achieves better performance than the others. In future work, we will study mobility-aware service migration and computation offloading for vehicular edge computing networks.

REFERENCES

[1] B. Lorenzo, J. Garcia-Rois, X. Li, J. Gonzalez-Castano, and Y. Fang, "A robust dynamic edge network architecture for the Internet of Things," IEEE Netw., vol. 32, no. 1, pp. 8–15, Jan. 2018.
[2] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "The fog of things paradigm: Road toward on-demand Internet of Things," IEEE Commun. Mag., vol. 56, no. 9, pp. 48–54, Sep. 2018.
[3] J. Zheng, Y. Cai, Y. Wu, and X. Shen, "Dynamic computation offloading for mobile cloud computing: A stochastic game-theoretic approach," IEEE Trans. Mobile Comput., vol. 18, no. 4, pp. 771–786, Apr. 2019.
[4] B. Li, Z. Fei, and Y. Zhang, "UAV communications for 5G and beyond: Recent advances and future trends," IEEE Internet Things J., vol. 6, no. 2, pp. 2241–2263, Apr. 2019.
[5] J. Liao, K. Wong, Y. Zhang, Z. Zheng, and K. Yang, "Coding, multicast, and cooperation for cache-enabled heterogeneous small cell networks," IEEE Trans. Wireless Commun., vol. 16, no. 10, pp. 6838–6853, Oct. 2017.
[6] X. Li, X. Wang, K. Li, Z. Han, and V. C. M. Leung, "Collaborative multi-tier caching in heterogeneous networks: Modeling, analysis, and design," IEEE Trans. Wireless Commun., vol. 16, no. 10, pp. 6926–6939, Oct. 2017.
[7] Q. Li, W. Shi, X. Ge, and Z. Niu, "Cooperative edge caching in software-defined hyper-cellular networks," IEEE J. Sel. Areas Commun., vol. 35, no. 11, pp. 2596–2605, Nov. 2017.
[8] W. Li, S. M. A. Oteafy, and H. S. Hassanein, "Rate-selective caching for adaptive streaming over information-centric networks," IEEE Trans. Comput., vol. 66, no. 9, pp. 1613–1628, Sep. 2017.
[9] K. N. Doan, T. Van Nguyen, T. Q. S. Quek, and H. Shin, "Content-aware proactive caching for backhaul offloading in cellular network," IEEE Trans. Wireless Commun., vol. 17, no. 5, pp. 3128–3140, May 2018.
[10] X. Yang, J. Zheng, Z. Fei, and B. Li, "Optimal file dissemination and beamforming for cache-enabled C-RANs," IEEE Access, vol. 6, pp. 6390–6399, Mar. 2018.
[11] W. Fan, Y. Liu, B. Tang, F. Wu, and H. Zhang, "TerminalBooster: Collaborative computation offloading and data caching via smart basestations," IEEE Wireless Commun. Lett., vol. 5, no. 6, pp. 612–615, Dec. 2016.
[12] Y. Hao, M. Chen, L. Hu, M. S. Hossain, and A. Ghoneim, "Energy efficient task caching and offloading for mobile edge computing," IEEE Access, vol. 6, pp. 11365–11373, Mar. 2018.
[13] Y. He, N. Zhao, and H. Yin, "Integrated networking, caching, and computing for connected vehicles: A deep reinforcement learning approach," IEEE Trans. Veh. Technol., vol. 67, no. 1, pp. 44–55, Jan. 2018.
[14] Q. Chen, F. R. Yu, T. Huang, R. Xie, J. Liu, and Y. Liu, "Joint resource allocation for software-defined networking, caching, and computing," IEEE/ACM Trans. Netw., vol. 26, no. 1, pp. 274–287, Feb. 2018.
[15] S. Yu, R. Langar, X. Fu, L. Wang, and Z. Han, "Computation offloading with data caching enhancement for mobile edge computing," IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 11098–11112, Nov. 2018.
[16] M. Chen, B. Liang, and M. Dong, "Multi-user multi-task offloading and resource allocation in mobile cloud systems," IEEE Trans. Wireless Commun., vol. 17, no. 10, pp. 6790–6805, Oct. 2018.
[17] J. Ren, G. Yu, Y. Cai, and Y. He, "Latency optimization for resource allocation in mobile-edge computation offloading," IEEE Trans. Wireless Commun., vol. 17, no. 8, pp. 5506–5519, Aug. 2018.
[18] T. Ouyang, Z. Zhou, and X. Chen, "Follow me at the edge: Mobility-aware dynamic service placement for mobile edge computing," IEEE J. Sel. Areas Commun., vol. 36, no. 10, pp. 2333–2345, Oct. 2018.
[19] J. Mei, K. Zheng, L. Zhao, Y. Teng, and X. Wang, "A latency and reliability guaranteed resource allocation scheme for LTE V2V communication systems," IEEE Trans. Wireless Commun., vol. 17, no. 6, pp. 3850–3860, Jun. 2018.
[20] J. Zheng, Y. Wu, N. Zhang, H. Zhou, Y. Cai, and X. Shen, "Optimal power control in ultra-dense small cell networks: A game-theoretic approach," IEEE Trans. Wireless Commun., vol. 16, no. 7, pp. 4139–4150, Jul. 2017.
[21] Y. Wu, L. P. Qian, J. Zheng, H. Zhou, and X. S. Shen, "Green-oriented traffic offloading through dual connectivity in future heterogeneous small cell networks," IEEE Commun. Mag., vol. 56, no. 5, pp. 140–147, May 2018.
[22] T. Frederic, D. Yanakiev, and J. Berg, "Control method for autonomous vehicles," U.S. Patent Application No. 15/697,368, Sep. 2017.
[23] F. Wang, J. Xu, X. Wang, and S. Cui, "Joint offloading and computing optimization in wireless powered mobile-edge computing systems," IEEE Trans. Wireless Commun., vol. 17, no. 3, pp. 1784–1797, Mar. 2018.
[24] M. Chen, Y. Qian, Y. Hao, Y. Li, and J. Song, "Data-driven computing and caching in 5G networks: Architecture and delay analysis," IEEE Wireless Commun., vol. 25, no. 1, pp. 70–75, Feb. 2018.
[25] S. Kuutti, S. Fallah, K. Katsaros, M. Dianati, F. Mccullough, and A. Mouzakitis, "A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications," IEEE Internet Things J., vol. 5, no. 2, pp. 829–846, Apr. 2018.
[26] W. Wen, Y. Cui, F. Zheng, S. Jin, and Y. Jiang, "Enhancing performance of random caching in large-scale heterogeneous wireless networks with random discontinuous transmission," IEEE Trans. Commun., vol. 66, no. 12, pp. 6287–6303, Dec. 2018.
[27] A. P. Miettinen and J. K. Nurminen, "Energy efficiency of mobile clients in cloud computing," in Proc. 2nd USENIX Conf. Hot Topics Cloud Comput., Boston, MA, USA, Jun. 2010, pp. 1–4.
[28] P. M. Castro, "Tightening piecewise McCormick relaxations for bilinear problems," Comput. Chem. Eng., vol. 72, pp. 300–311, 2015.
[29] Y. Wang, X. Tao, X. Zhang, and G. Mao, "Joint caching placement and user association for minimizing user download delay," IEEE Access, vol. 4, pp. 8625–8633, 2016.
[30] H. Nagarajan, M. Lu, E. Yamangil, and R. Bent, "Tightening McCormick relaxations for nonlinear programs via dynamic multivariate partitioning," in Proc. Int. Conf. Princ. Pract. Constraint Program., Toulouse, France, Sep. 2016, pp. 369–387.
[31] C. Chi, W. Li, and C. Lin, Convex Optimization for Signal Processing and Communications: From Fundamentals to Applications. Boca Raton, FL, USA: CRC Press, Feb. 2017.
[32] Z. Wu and Z. Fei, "Precoder design in downlink CoMP-JT MIMO network via WMMSE and asynchronous ADMM," Sci. China Inf. Sci., vol. 61, no. 8, pp. 082306:1–082306:13, Aug. 2018.
[33] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2011.
[34] P. Luong, F. Gagnon, C. Despins, and L. Tran, "Optimal joint remote radio head selection and beamforming design for limited fronthaul C-RAN," IEEE Trans. Signal Process., vol. 65, no. 21, pp. 5605–5620, Nov. 2017.
[35] L. Majzoobi, F. Lahouti, and V. Shah-Mansouri, "Analysis of distributed ADMM algorithm for consensus optimization in presence of node error," IEEE Trans. Signal Process., vol. 67, no. 7, pp. 1774–1784, Apr. 2019.
[36] M. Liu, R. Yu, Y. Teng, V. C. M. Leung, and M. Song, "Computation offloading and content caching in wireless blockchain networks with mobile edge computing," IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 11008–11021, Nov. 2018.
[37] X. Cao and K. J. R. Liu, "Distributed linearized ADMM for network cost minimization," IEEE Trans. Signal Inf. Process. Over Netw., vol. 4, no. 3, pp. 626–638, Sep. 2018.
[38] S. Bi and Y. J. Zhang, "Computation rate maximization for wireless powered mobile-edge computing with binary computation offloading," IEEE Trans. Wireless Commun., vol. 17, no. 6, pp. 4177–4190, Jun. 2018.
[39] H. Guo and J. Liu, "Collaborative computation offloading for multiaccess edge computing over fiber-wireless networks," IEEE Trans. Veh. Technol., vol. 67, no. 5, pp. 4514–4526, May 2018.
[40] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Philadelphia, PA, USA: Soc. Ind. Appl. Math., Aug. 2001.
[41] J. Zheng, Y. Cai, Y. Liu, Y. Xu, B. Duan, and X. Shen, "Optimal power allocation and user scheduling in multicell networks: Base station cooperation using a game-theoretic approach," IEEE Trans. Wireless Commun., vol. 13, no. 12, pp. 6928–6942, Dec. 2014.
[42] X. Chen, L. Jiao, W. Li, and X. Fu, "Efficient multi-user computation offloading for mobile-edge cloud computing," IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795–2808, Oct. 2016.

Xiaolong Yang received the M.S. degree in communication and information systems from Hebei University, Baoding, China, in 2016. He is currently working toward the Ph.D. degree with the School of Information and Electronics, Beijing Institute of Technology, Beijing, China. Since December 2018, he has been a Visiting Student with Singapore University of Technology and Design, Singapore. His research interests include wireless caching, mobile edge computing, and resource allocation.

Zesong Fei received the B.Eng. and Ph.D. degrees in electronic engineering from the Beijing Institute of Technology (BIT), Beijing, China, in 1999 and 2004, respectively. He is currently a Professor with the Research Institute of Communication Technology, BIT. From 2012 to 2013, he was a Visiting Scholar with the University of Hong Kong, Hong Kong. His current research interests focus on mobile edge computing, physical-layer security, and error control coding.

Jianchao Zheng received the B.S. degree in communications engineering and the Ph.D. degree in communications and information systems from the College of Communications Engineering, PLA University of Science and Technology, Nanjing, China, in 2010 and 2016, respectively. From 2015 to 2016, he was a Visiting Scholar with the Broadband Communications Research Group, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada. He is currently an Assistant Researcher with the National Innovation Institute of Defense Technology, Academy of Military Sciences of PLA, Beijing, China. He has authored or coauthored several papers in international conferences and reputed journals in his research area. His research interests focus on interference mitigation techniques, green communications and computing networks, game theory, learning theory, and optimization techniques.
Ning Zhang received the Ph.D. degree from the University of Waterloo, Waterloo, ON, Canada, in 2015. After that, he was a Postdoctoral Research Fellow with the University of Waterloo and the University of Toronto, Canada, respectively. He is currently an Assistant Professor with Texas A&M University-Corpus Christi, USA. His current research interests include next-generation mobile networks, mobile edge computing, and security. He serves/served as an Associate Editor for the IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, IEEE INTERNET OF THINGS JOURNAL, IEEE ACCESS, and IET Communications, and as a Guest Editor of several international journals, such as IEEE WIRELESS COMMUNICATIONS and IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING. He has also served as the Workshop Chair for MobiEdge'18 (in conjunction with IEEE WiMob 2018), CoopEdge'18 (in conjunction with IEEE EDGE 2018), and 5G and NTN'19 (in conjunction with IEEE EDGE 2019). He was the recipient of several best paper awards from IEEE Globecom in 2014, IEEE WCSP in 2015, the Journal of Communications and Information Networks in 2018, and IEEE ICC, IEEE ICCC, and the IEEE Technical Committee on Transmission Access and Optical Systems in 2019, respectively.

Alagan Anpalagan received the B.A.Sc., M.A.Sc., and Ph.D. degrees, all in electrical engineering, from the University of Toronto, Toronto, ON, Canada. He joined the ELCE Department, Ryerson University, Canada, in 2001, and was promoted to Full Professor in 2010. He has served the department in many administrative positions, such as Associate Chair, Program Director for Electrical Engineering, and Graduate Program Director. During his sabbatical, he was a Visiting Professor with the Asian Institute of Technology and a Visiting Researcher at Kyoto University. His industrial experience includes working for three years with Bell Mobility, Nortel Networks, and IBM. He directs a research group working on radio resource management and radio access and networking areas within the WINCORE Laboratory. He has coauthored four edited books and two books in the wireless communication and networking areas. He served as an Editor for the IEEE COMMUNICATIONS SURVEYS AND TUTORIALS (2012–2014), IEEE COMMUNICATIONS LETTERS (2010–2013), and the EURASIP Journal of Wireless Communications and Networking (2004–2009). He has also served as a Guest Editor for six Special Issues published in IEEE, IET, and ACM. He served as TPC Co-Chair of IEEE VTC Fall 2017; TPC Co-Chair of the IEEE INFOCOM'16 Workshop on Green and Sustainable Networking and Computing; IEEE Globecom'15: SAC Green Communication and Computing; and IEEE PIMRC'11: Cognitive Radio and Spectrum Management. He has served as Vice Chair, IEEE SIG on Green and Sustainable Networking and Computing with Cognition and Cooperation (2015–2018), IEEE Canada Central Area Chair (2012–2014), IEEE Toronto Section Chair (2006–2007), ComSoc Toronto Chapter Chair (2004–2005), and IEEE Canada Professional Activities Committee Chair (2009–2011). He was the recipient of the IEEE Canada J.M. Ham Outstanding Engineering Educator Award (2018), the YSGS Outstanding Contribution to Graduate Education Award (2017), the Dean's Teaching Award (2011), and the Faculty Scholastic, Research, and Creativity Award thrice from Ryerson University. He was also the recipient of the IEEE M.B. Broughton Central Canada Service Award (2016), the Exemplary Editor Award from IEEE ComSoc (2013), and the Editor-in-Chief Top 10 Choice Award in Transactions on Emerging Telecommunications Technology (2012), and is a coauthor of a paper that received the IEEE SPS Young Author Best Paper Award (2015). He is a Registered Professional Engineer in the province of Ontario, Canada, a Fellow of the Institution of Engineering and Technology, and a Fellow of the Engineering Institute of Canada.