Mobile Edge Computing: A Survey on Architecture and Computation Offloading
Pavel Mach and Zdenek Becvar

Abstract—Technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand-in-hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by limited battery capacity and energy consumption of the UEs. A suitable solution extending the battery lifetime of the UEs is to offload the applications demanding huge processing to a conventional centralized cloud. Nevertheless, this option introduces a significant execution delay consisting of the delivery of the offloaded applications to the cloud and back plus the time of the computation at the cloud. Such a delay is inconvenient and makes the offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of the mobile network, enabling it to run the highly demanding applications at the UE while meeting strict delay requirements. The MEC computing resources can be exploited also by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that, we survey existing concepts integrating MEC functionalities to the mobile networks and discuss current advancement in standardization of the MEC. The core of this survey is, then, focused on the user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: 1) decision on computation offloading; 2) allocation of computing resources within the MEC; and 3) mobility management. Finally, we highlight lessons learned in the area of the MEC and we discuss open research challenges yet to be addressed in order to fully enjoy the potentials offered by the MEC.

Index Terms—Mobile edge computing, mobile network architecture, computation offloading, allocation of computing resources, mobility management, standardization, use-cases.

Manuscript received October 28, 2016; revised February 17, 2017; accepted March 10, 2017. Date of publication March 15, 2017; date of current version August 21, 2017. This work was supported by the Czech Technical University in Prague under Grant SGS17/184/OHK3/3T/13. (Corresponding author: Pavel Mach.) The authors are with the Department of Telecommunication Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague, Czech Republic (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/COMST.2017.2682318

I. INTRODUCTION

THE users' requirements on data rates and quality of service (QoS) are exponentially increasing. Moreover, the technological evolution of smartphones, laptops and tablets enables new highly demanding services and applications to emerge. Although new mobile devices are more and more powerful in terms of central processing unit (CPU), even these may not be able to handle the applications requiring huge processing in a short time. Moreover, high battery consumption still poses a significant obstacle restricting the users from fully enjoying highly demanding applications on their own devices. This motivates the development of the mobile cloud computing (MCC) concept allowing cloud computing for mobile users [1]. In the MCC, a user equipment (UE) may exploit computing and storage resources of powerful distant centralized clouds (CC), which are accessible through a core network (CN) of a mobile operator and the Internet. The MCC brings several advantages [2]: 1) extending battery lifetime by offloading energy consuming computations of the applications to the cloud, 2) enabling sophisticated applications to the mobile users, and 3) providing higher data storage capabilities to the users. Nevertheless, the MCC also imposes a huge additional load both on radio and backhaul of mobile networks and introduces high latency since data is sent to powerful farms of servers that are, in terms of network topology, far away from the users.

To address the problem of a long latency, the cloud services should be moved to a proximity of the UEs, i.e., to the edge of the mobile network, as considered in the newly emerged edge computing paradigm. The edge computing can be understood as a specific case of the MCC. Nevertheless, in the conventional MCC, the cloud services are accessed via the Internet connection [3] while, in the case of the edge computing, the computing/storage resources are supposed to be in proximity of the UEs (in the sense of network topology). Hence, the MEC can offer significantly lower latencies and jitter when compared to the MCC. Moreover, while the MCC is a fully centralized approach with farms of computers usually placed at one or few locations, the edge computing is supposed to be deployed in a fully distributed manner. On the other hand, the edge computing provides only limited computational and storage resources with respect to the MCC. A high level comparison of key technical aspects of the MCC and the edge computing is outlined in Table I.

The first edge computing concept bringing the computation/storage closer to the UEs, proposed in 2009, is the cloudlet [4]. The idea behind the cloudlet is to place computers with high computation power at strategic locations in order to provide both computation resources and storage for the UEs in vicinity. The cloudlet concept of the computing "hotspots" is similar to the WiFi hotspots scenario, but instead of Internet connectivity the cloudlet enables cloud services to the mobile users. The fact that cloudlets are supposed to be mostly accessed by the mobile UEs through a WiFi connection is seen as a disadvantage since the UEs have to switch between the mobile network and WiFi whenever the cloudlet services are exploited [2]. Moreover, QoS of the mobile UEs is hard to fulfill, similarly as in the case of the MCC, since the cloudlets are not an inherent part of the mobile network and the coverage of WiFi is only local with limited support of mobility.
TABLE I
HIGH LEVEL COMPARISON OF MCC AND EDGE COMPUTING CONCEPTS

The other option enabling cloud computing at the edge is to perform computing directly at the UEs through an ad-hoc cloud allowing several UEs in proximity to combine their computation power and, thus, process highly demanding applications locally [5]–[14]. To facilitate the ad-hoc cloud, several critical challenges need to be addressed: 1) finding proper computing UEs in proximity while guaranteeing that processed data will be delivered back to the source UE, 2) coordination among the computing UEs has to be enabled despite the fact that there are no control channels to facilitate reliable computing, 3) the computing UEs have to be motivated to provide their computing power to other devices given the battery consumption and additional data transmission constraints, and 4) security and privacy issues.

A more general concept of the edge computing, when compared to cloudlets and ad-hoc clouds, is known as fog computing. The fog computing paradigm (often abbreviated as Fog in the literature) has been introduced in 2012 by Cisco to enable processing of the applications on billions of connected devices at the edge of the network [15]. Consequently, the fog computing may be considered as one of the key enablers of Internet of Things (IoT) and big data applications [16] as it offers: 1) low latency and location awareness due to the proximity of the computing devices to the edge of the network, 2) wide-spread geographical distribution when compared to the CC, 3) interconnection of a very large number of nodes (e.g., wireless sensors), and 4) support of streaming and real-time applications [15]. Moreover, the characteristics of the fog computing can be exploited in many other applications and scenarios such as smart grids, connected vehicles for Intelligent Transport Systems (ITS) or wireless sensor networks [17]–[20].

From the mobile users' point of view, the most notable drawback of all above-mentioned edge computing concepts is that QoS and QoE (Quality of Experience) for users can be hardly guaranteed, since the computing is not integrated into the architecture of the mobile network. One concept integrating the cloud capabilities into the mobile network is the Cloud Radio Access Network (C-RAN) [21]. The C-RAN exploits the idea of a distributed protocol stack [22], where some layers of the protocol stack are moved from distributed Remote Radio Heads (RRHs) to centralized baseband units (BBUs). The BBUs' computation power is, then, pooled together into virtualized resources that are able to serve tens, hundreds or even thousands of RRHs. Although the computation power of this virtualized BBU pool is exploited primarily for centralized control and baseband processing, it may also be used for the computation offloading to the edge of the network (see [23]).

Another concept integrating the edge computing into the mobile network architecture is developed by the newly created (2014) industry specification group (ISG) within the European Telecommunications Standards Institute (ETSI) [24]. The solution outlined by ETSI is known as Mobile Edge Computing (MEC). The standardization efforts relating to the MEC are driven by prominent mobile operators (e.g., DOCOMO, Vodafone, TELECOM Italia) and manufacturers (e.g., IBM, Nokia, Huawei, Intel). The main purpose of the ISG MEC group is to enable an efficient and seamless integration of the cloud computing functionalities into the mobile network, and to help developing favorable conditions for all stakeholders (mobile operators, service providers, vendors, and users).

Several surveys on cloud computing have been published so far. Khan et al. [3] survey MCC application models and highlight their advantages and shortcomings. In [25], the problem of heterogeneity in the MCC is tackled. The heterogeneity is understood as a variability of mobile devices, different cloud vendors providing different services, infrastructures, platforms, and various communication media and technologies. The paper identifies how this heterogeneity impacts the MCC and discusses related challenges. Wen et al. [26] survey existing efforts on Cloud Mobile Media, which provides rich multimedia services over the Internet and mobile wireless networks. All above-mentioned papers focus, in general, on the MCC where the cloud is not allocated specifically at the edge of the mobile network, but is accessed through the Internet. Due to a wide potential of the MEC, there is a lot of effort both in industry and academia focusing on the MEC in particular. Despite this fact, there is just one survey focusing primarily on the MEC [27] that, however, only briefly surveys several research works dealing with the MEC and presents a taxonomy of the MEC by describing its key attributes. Furthermore, Roman et al. [28] extensively survey security issues for various edge computing concepts. On top of that, Luon et al. [29] dedicate one chapter to the edge computing, where applications of economic and pricing models are considered for resource management in the edge computing.

In contrast to the above-mentioned surveys, we describe key use cases and scenarios for the MEC (Section II). Then, we survey existing MEC concepts proposed in the literature integrating the MEC functionalities into the mobile networks and we discuss standardization of the MEC (Section III). After that, the core part of the paper is focused on technical works dealing with computation offloading to the MEC. On one hand, the computation offloading can be seen as a key use case from the user perspective as it enables running new sophisticated applications at the UE while reducing its energy consumption (see [30]–[36] where computation offloading to a distant CC is assumed). On the other hand,
the computation offloading brings several challenges, such as selection of proper application and programming models, accurate estimation of energy consumption, efficient management of simultaneous offloading by multiple users, or virtual machine (VM) migration [37]. In this respect, we overview several general principles related to the computation offloading, such as offloading classification (full, partial offloading), factors influencing the offloading itself, and management of the offloading in practice (Section IV). Afterwards, we sort the efforts within the research community addressing the following key challenges regarding computation offloading into the MEC:
• A decision on the computation offloading to the MEC with the purpose to determine whether the offloading is profitable for the UE in terms of energy consumption and/or execution delay (Section V).
• An efficient allocation of the computing resources within the MEC if the computation is offloaded in order to minimize execution delay and balance the load of both computing resources and communication links (Section VI).
• Mobility management for the applications offloaded to the MEC guaranteeing service continuity if the UEs exploiting the MEC roam throughout the network (Section VII).
Moreover, we summarize the lessons learned from the state of the art focused on computation offloading to the MEC (Section VIII) and outline several open challenges, which need to be addressed to make the MEC beneficial for all stakeholders (Section IX). Finally, we summarize general outcomes and draw conclusions (Section X).

II. USE CASES AND SERVICE SCENARIOS

The MEC brings many advantages to all stakeholders, such as mobile operators, service providers or users. As suggested in [38] and [39], three main use case categories, depending on the subject to which they are profitable, can be distinguished for the MEC (see Fig. 1). The next subsections discuss individual use case categories and pinpoint several key service scenarios and applications.

A. Consumer-Oriented Services

The first use case category is consumer-oriented and, hence, should be beneficial directly to the end-users. In general, the users profit from the MEC mainly by means of the computation offloading, which enables running new emerging applications at the UEs. One of the applications benefiting from the computation offloading is a Web accelerated browser, where most of the browsing functions (Web contents evaluation, optimized transmission, etc.) are offloaded to the MEC; see experimental results on offloading of the Web accelerated browser to the MEC in [40]. Moreover, face/speech recognition or image/video editing are also suitable for the MEC as these require a large amount of computation and storage [41]. Besides, the computation offloading to the MEC can be exploited by the applications based on augmented, assisted or virtual reality. These applications derive additional information about the users' neighborhood by performing an analysis of their surroundings (e.g., visiting tourists may find points of interest in their proximity). This may require fast responses and/or a significant amount of computing resources not available
at the UE. An applicability of the MEC for augmented reality is shown in [42]. The authors demonstrate on a real MEC testbed that a reduction of latency by up to 88% and a reduction of energy consumption of the UE by up to 93% can be accomplished by the computation offloading to the MEC.

On top of that, the users running low latency applications, such as online gaming or remote desktop, may profit from the MEC in proximity. In this case, a new instance of a specific application is initiated at an appropriate mobile edge host to reduce the latency and resource requirements of the application at the UE.

B. Operator and Third Party Services

The second use case category is represented by the services from which operators and third parties can benefit. An example of the use case profitable for the operator or third party is a gathering of a huge amount of data from the users or sensors. Such data is first pre-processed and analyzed at the MEC. The pre-processed data is, then, sent to distant central servers for further analysis. This could be exploited for safety and security purposes, such as monitoring of an area (e.g., car park monitoring).

Another use case is to exploit the MEC for IoT (Internet of Things) purposes [43]–[45]. Basically, IoT devices are connected through various radio technologies (e.g., 3G, LTE, WiFi, etc.) using diverse communication protocols. Hence, there is a need for a low latency aggregation point to handle various protocols, distribution of messages and processing. This can be enabled by the MEC acting as an IoT gateway, whose purpose is to aggregate and deliver IoT services into highly distributed mobile base stations in order to enable applications responding in real time.

The MEC can be also exploited for ITS to extend the connected car cloud into the mobile network. Hence, roadside applications running directly at the MEC can receive local messages directly from applications in the vehicles and roadside sensors, analyze them and broadcast warnings (e.g., an accident) to nearby vehicles with very low latency. The exploitation of the MEC for car-to-car and car-to-infrastructure communications was demonstrated by Nokia and its partners in an operator's LTE network in 2016 [46], [47].

C. Network Performance and QoE Improvement Services

The third category of use cases covers those optimizing network performance and/or improving QoE. One such use case is to enable coordination between radio and backhaul networks. So far, if the capacity of either backhaul or radio link is degraded, the overall network performance is negatively influenced as well, since the other part of the network (either radio or backhaul, respectively) is not aware of the degradation. In this respect, an analytic application exploiting the MEC can provide real-time information on traffic requirements of the radio/backhaul network. Then, an optimization application, running on the MEC, reshapes the traffic per application or re-routes traffic as required.

Another way to improve performance of the network is to alleviate congested backhaul links by local content caching at the mobile edge. This way, the MEC application can store the most popular content used in its geographical area. If the content is requested by the users, it does not have to be transferred over the backhaul network.

Besides alleviation and optimization of the backhaul network, the MEC can also help in radio network optimization. For example, gathering related information from the UEs and processing it at the edge will result in more efficient scheduling. In addition, the MEC can also be used for mobile video delivery optimization using throughput guidance for TCP (Transmission Control Protocol). The TCP has an inherent difficulty to adapt to rapidly varying conditions on the radio channel, resulting in an inefficient use of the resources. To deal with this problem, the analytic MEC application can provide a real-time indication on an estimated throughput to a backend video server in order to match the application-level coding to the estimated throughput.

III. MEC ARCHITECTURE AND STANDARDIZATION

This section introduces and compares several concepts for the computation at the edge integrated to the mobile network. First, we overview various MEC solutions proposed in the literature that enable to bring computation close to the UEs. Secondly, we describe the effort done within the ETSI standardization organization regarding the MEC. Finally, we compare individual existing MEC concepts (proposed in both literature and ETSI) from several perspectives, such as MEC control or location of the computation/storage resources.

A. Overview of the MEC Concept

In recent years, several MEC concepts with the purpose to smoothly integrate cloud capabilities into the mobile network architecture have been proposed in the literature. This section briefly introduces fundamental principles of the small cell cloud (SCC), mobile micro cloud (MMC), fast moving personal cloud, follow me cloud (FMC), and CONCERT. Moreover, the section shows enhancements/modifications to the network architecture necessary for implementation of each MEC concept.

1) Small Cell Cloud (SCC): The basic idea of the SCC, firstly introduced in 2012 by the European project TROPIC [48], [53], is to enhance small cells (SCeNBs), like microcells, picocells or femtocells, by additional computation and storage capabilities. A similar idea is later on addressed in the SESAME project as well, where the cloud-enabled SCeNBs support the edge computing [49], [50]. The cloud-enhanced SCeNBs can pool their computation power exploiting the network function virtualization (NFV) [51], [52] paradigm. Because a high number of the SCeNBs is supposed to be deployed in future mobile networks, the SCC can provide enough computation power for the UEs, especially for services/applications having stringent requirements on latency (examples of such applications are listed in Section II-A).

In order to fully and smoothly integrate the SCC concept into the mobile network architecture, a new entity, denoted as a small cell manager (SCM), is introduced to control the SCC [53]. The SCM is in charge of the management of the
Fig. 2. SCC architecture (MME - Mobility Management Entity, HSS - Home Subscriber Server, S-GW - Serving Gateway, P-GW - Packet Gateway).
TABLE II
COMPARISON OF EXISTING MEC CONCEPTS

C. Summary

This section mutually compares the MEC concepts proposed in the literature with the vision of the MEC developed under ETSI. There are two common trends followed by individual MEC solutions that bring cloud to the edge of the mobile network. The first trend is based on virtualization techniques exploiting NFV principles. The network virtualization is a necessity in order to flexibly manage virtualized resources provided by the MEC. The second trend is a decoupling of the control and data planes by taking advantage of the SDN paradigm, which allows a dynamic adaptation of the network to changing traffic patterns and users' requirements. The use of SDN for the MEC is also in line with current trends in mobile networks [69]–[71]. Regarding control/signaling, the MMC and MobiScud assume a fully decentralized approach while the SCC, FMC, and CONCERT adopt either fully centralized control or hierarchical control for better scalability and flexibility.

If we compare individual MEC concepts in terms of computation/storage resources deployment, the obvious effort is to fully distribute these resources within the network. Still, each MEC concept differs in the location where the computation/storage resources are physically located. While the SCC, MMC and MobiScud assume to place the computation close to the UEs within the RAN, the FMC solution considers integration of the DCs farther away, for example, in a distributed CN. On top of that, CONCERT distributes the computation/storage resources throughout the network in a hierarchical manner so that low demanding computation applications are handled locally and high demanding applications are relegated either to regional or central servers. Concerning ETSI MEC, there are also many options where to place MEC servers offering computation/storage resources to the UEs. The most probable course of action is that the MEC servers will be deployed everywhere in the network to guarantee high scalability of the computation/storage resources. The comparison of all existing MEC concepts is shown in Table II.

Fig. 8. Possible outcomes of computation offloading decision.

IV. INTRODUCTION TO COMPUTATION OFFLOADING

From the user perspective, a critical use case regarding the MEC is computation offloading as this can save energy and/or speed up the process of computation. In general, a crucial part regarding computation offloading is to decide whether to offload or not. In the former case, also a question is how much and what should be offloaded [41]. Basically, a decision on computation offloading may result in:
• Local execution - The whole computation is done locally at the UE (see Fig. 8). The offloading to the MEC is not performed, for example, due to unavailability of the MEC computation resources or if the offloading simply does not pay off.
• Full offloading - The whole computation is offloaded and processed by the MEC.
• Partial offloading - A part of the computation is processed locally while the rest is offloaded to the MEC.
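To make the three outcomes listed above concrete, the following sketch shows how a UE-side decision routine might map estimated costs of the three options onto one of them. It is purely illustrative; the inputs (profiler-provided energy and delay estimates) and the selection rule are assumptions rather than a procedure taken from any work surveyed below.

```python
from enum import Enum

class Decision(Enum):
    LOCAL = "local execution"
    FULL = "full offloading"
    PARTIAL = "partial offloading"

def offloading_decision(e_local, d_local, e_full, d_full,
                        e_partial, d_partial, d_max, mec_available):
    """Pick the least energy-consuming option whose execution delay
    stays below the application constraint d_max.
    All inputs are estimates assumed to be supplied by a profiler."""
    if not mec_available:
        return Decision.LOCAL
    candidates = {
        Decision.LOCAL: (e_local, d_local),
        Decision.FULL: (e_full, d_full),
        Decision.PARTIAL: (e_partial, d_partial),
    }
    feasible = {k: v for k, v in candidates.items() if v[1] <= d_max}
    if not feasible:
        # no option meets the delay constraint; fall back to the fastest one
        return min(candidates, key=lambda k: candidates[k][1])
    # among feasible options, minimize the UE energy consumption
    return min(feasible, key=lambda k: feasible[k][0])
```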
Fig. 13. The example of computation offloading decision based on energy consumption while satisfying execution delay constraint.
UE spends a certain amount of energy in order to: 1) transmit the offloaded data for computation to the MEC (Eot) and 2) receive the results of the computation from the MEC (Eor). A simple example of the computation offloading decision primarily based on the energy consumption is shown in Fig. 13. In the given example, the UE1 decides to perform the computation locally since the energy spent by the local execution (El) is significantly lower than the energy required for transmission/reception of the offloaded data (Eo). Contrary, the UE2 offloads data to the MEC as the energy required by the computation offloading is significantly lower than the energy spent by the local computation. Although the overall execution delay would be lower if the UE1 offloads computation to the MEC and also if the UE2 performs the local execution, the delay is still below the maximum allowed execution delay constraint (i.e., Dl < Dmax). Note that if only the execution delay would be considered for the offloading decision (as considered in Section V-A3), both UEs would unnecessarily spend more energy.
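Using the notation of the example above, and writing D_l and D_o for the execution delays of the local and offloaded execution (the symbol D_o is an added convenience, not used in the surveyed papers), the energy-driven decision of Fig. 13 can be restated schematically as

\[
E_o = E_{ot} + E_{or}, \qquad
\text{offload} \iff \big(E_o < E_l\big) \wedge \big(D_o \le D_{max}\big), \qquad
\text{compute locally} \iff \big(E_l \le E_o\big) \wedge \big(D_l \le D_{max}\big).
\]

If the energetically preferable option violates the delay constraint D_max, the other option is selected instead. This is only a compact restatement of the example, not the optimization model of any particular paper discussed in this section.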
The computation offloading decision minimizing the energy consumption at the UE while satisfying the execution delay of the application is proposed in [78]. The optimization problem is formulated as a constrained Markov decision process (CMDP). To solve the optimization problem, two resource allocation strategies are introduced. The first strategy is based on online learning, where the network adapts dynamically with respect to the application running at the UE. The second strategy is a pre-calculated offline strategy, which takes advantage of a certain level of knowledge regarding the application (such as arrival rates measured in packets per slot, radio channel condition, etc.). The numerical experiments show that the pre-calculated offline strategy is able to outperform the online strategy by up to 50% for low and medium arrival rates (loads). Since the offline resource allocation strategy proposed in [78] shows its merit, the authors devise two additional dynamic offline strategies for the offloading [79]: a deterministic offline strategy and a randomized offline strategy. It is demonstrated that both offline offloading strategies can lead to significant energy savings compared to the case when the computing is done solely at the UE (energy savings up to 78%) or solely at the MEC (up to 15%).

A further extension of [79] from a single-UE to a multi-UEs scenario is considered in [80]. The main objective is to jointly optimize scheduling and computation offloading strategy for each UE in order to guarantee QoE, fairness between the UEs, low energy consumption, and average queuing/delay constraints. The UEs that are not allowed to offload the computation make either local computation or stay idle. It is shown that the offline strategy notably outperforms the online strategies in terms of the energy saving (by roughly 50%). In addition, the energy consumed by individual UEs strongly depends on the requirements of other UEs' applications.

Another offloading decision strategy for the multi-UEs case minimizing the energy consumption at the UEs while satisfying the maximum allowed execution delay is proposed in [81]. A decision on the computation offloading is done periodically in each time slot, during which all the UEs are divided into two groups. While the UEs in the first group are allowed to offload computation to the MEC, the UEs in the second group have to perform computation locally due to unavailable computation resources at the MEC (note that in the paper, the computation is done at the serving SCeNB). The UEs are sorted to the groups according to the length of queue, that is, according to the amount of data they need to process. After the UEs are admitted to offload the computation, joint allocation of the communication and computation resources is performed by finding optimal transmission power of the UEs and allocation of the SCeNB's computing resources to all individual UEs. The performance of the proposal is evaluated in terms of an average queue length depending on intensity of data arrival and a number of antennas used at the UEs and the SCeNB. It is shown that the more antennas are used, the less transmission power at the UEs is needed while still ensuring the delay constraint of the offloaded computation.

The main weak point of [81] is that it assumes only a single SCeNB and, consequently, there is no interference among the UEs connected to various SCeNBs. Hence, the work in [81] is extended in [82] to the multi-cell scenario with N SCeNBs to reflect the real network deployment. Since the formulated optimization problem in [81] is no longer convex, the authors propose a distributed iterative algorithm exploiting Successive Convex Approximation (SCA) converging to a local optimal solution. The numerical results demonstrate that the proposed joint optimization of radio and computational resources significantly outperforms methods optimizing radio and computation separately. Moreover, it is shown that the applications with a lower amount of data to be offloaded and, at the same time, requiring a high number of CPU cycles for processing are more suitable for the computation offloading. The reason is that the energy spent by the transmission/reception of the offloaded data to the MEC is significantly lower than the energy savings at the UE due to the computation offloading. The work
in [82] is further extended in [83] by a consideration of multi-clouds that are associated to individual SCeNBs. The results show that with an increasing number of the SCeNBs (i.e., with increasing number of clouds), the energy consumption of the UE proportionally decreases.

The same goal as in the previous paper is achieved in [84] by means of an energy-efficient computation offloading (EECO) algorithm. The EECO is divided into three stages. In the first stage, the UEs are classified according to their time and energy cost features of the computation into: 1) the UEs that should offload the computation to the MEC as they cannot satisfy the execution latency constraint, 2) the UEs that should compute locally as they are able to process the computation by themselves while the energy consumption is below a predefined threshold, and 3) the UEs that may offload the computation or not. In the second stage, the offloading priority is given to the UEs from the first and the third set, determined by their communication channels and the computation requirements. In the third stage, the eNBs/SCeNBs allocate radio resources to the UEs with respect to given priorities. The computational complexity of the EECO is O(max(I^2 + N, IK + N)), where I is the number of iterations, N stands for the amount of UEs, and K represents the number of available channels. According to the presented numerical results, the EECO is able to decrease the energy consumption by up to 15% when compared to the computation without offloading. Further, it is proven that with increasing computational capabilities of the MEC, the number of UEs deciding to offload the computation increases as well.
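The first EECO stage described above is essentially a three-way classification of the UEs. The snippet below illustrates the idea with assumed inputs (per-UE estimates of local execution delay and energy, and an energy threshold); it is a schematic illustration, not the algorithm of [84].

```python
def classify_ues(ues, d_max, energy_threshold):
    """Split UEs into the three sets used in the first EECO-like stage:
    'must_offload' - local execution cannot meet the latency constraint,
    'local'        - local execution meets both latency and energy limits,
    'flexible'     - may offload or not (decided in later stages)."""
    must_offload, local, flexible = [], [], []
    for ue in ues:  # each ue: dict with estimated local delay/energy
        if ue["local_delay"] > d_max:
            must_offload.append(ue)
        elif ue["local_energy"] <= energy_threshold:
            local.append(ue)
        else:
            flexible.append(ue)
    return must_offload, local, flexible

# example with hypothetical numbers: delay in ms, energy in mJ
ues = [{"id": 1, "local_delay": 120, "local_energy": 800},
       {"id": 2, "local_delay": 40,  "local_energy": 150},
       {"id": 3, "local_delay": 60,  "local_energy": 900}]
print(classify_ues(ues, d_max=100, energy_threshold=300))
```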
3) Trade-Off Between Energy Consumption and Execution Delay: The computation offloading decision for the multi-user multi-channel environment considering a trade-off between the energy consumption at the UE and the execution delay is proposed in [85]. Whether the offloading decision prefers to minimize energy consumption or execution delay is determined by a weighing parameter. The main objective of the paper is twofold: 1) choose if the UEs should perform the offloading to the MEC or not depending on the weighing parameter and 2) in case of the computation offloading, select the most appropriate wireless channel to be used for data transmission. To this end, the authors present an optimal centralized solution that is, however, NP-hard in the multi-user multi-channel environment. Consequently, the authors also propose a distributed computation offloading algorithm achieving Nash equilibrium. Both the optimal centralized solution and the distributed algorithm are compared in terms of two performance metrics: 1) the amount of the UEs for which the computation offloading to the MEC is beneficial and 2) the computation overhead expressed by a weighing of the energy consumption and the execution delay. The distributed algorithm performs only slightly worse than the centralized one in both above-mentioned performance metrics. In addition, the distributed algorithm significantly outperforms the cases when all UEs compute all applications locally and when all UEs prefer computing at the MEC (roughly by up to 40% for 50 UEs).

Another algorithm for the computation offloading decision weighing the energy consumption at the UE and the execution delay is proposed in [86]. The main difference with respect to [85] is that Chen et al. [86] assume the computation can be offloaded also to the remote centralized cloud (CC), if the computation resources of the MEC are not sufficient. The computation offloading decision is done in a sequential manner. In the first step, the UE decides whether to offload the application(s) to the MEC or not. If the application is offloaded to the MEC, the MEC evaluates, in the second step, if it is able to satisfy the request or if the computation should be further relayed to the CC. The problem is formulated as a non-convex quadratically constrained quadratic program (QCQP), which is, however, NP-hard. Hence, a heuristic algorithm based on a semi-definite relaxation together with a novel randomization method is proposed. The proposed heuristic algorithm is able to significantly lower the total system cost (i.e., weighted sum of total energy consumption, execution delay and costs to offload and process all applications) when compared to the situation if the computation is done always solely at the UE (roughly up to 70%) or always at the MEC/CC (approximately up to 58%).

The extension of [86] from the single-UE to the multi-UEs scenario is presented in [87]. Since the multiple UEs are assumed to be connected to the same computing node (e.g., eNB), the offloading decision is done jointly with the allocation of computing and communication resources to all UEs. Analogously to [86], the proposal in [87] outperforms the case when computation is done always by the UE (system cost decreased by up to 45%) and the strategy if computation is always offloaded to the MEC/CC (system cost decreased by up to 50%). Still, it would be useful to show the results for a more realistic scenario with multiple computing eNBs, where interference among the UEs attached to different eNBs would play an important role in the offloading decision. Moreover, the overall complexity of the proposed solution is O(N^6) per one iteration, which could be too high for a high number of UEs (N) connected to the eNB.

B. Partial Offloading

This subsection focuses on the works dealing with the partial offloading. We classify the research into works focused on minimization of the energy consumption at the UE while a predefined delay constraint is satisfied (Section V-B1) and works finding a proper trade-off between both the energy consumption and the execution delay (Section V-B2).

1) Minimization of Energy Consumption While Satisfying Execution Delay Constraint: This section focuses on the works aiming at minimization of the energy consumption while satisfying maximum allowable delay, similarly as in Section V-A2. Cao et al. [88] consider the application divided into a non-offloadable part and N offloadable parts as shown in Fig. 9b. The main objective of the paper is to decide which offloadable parts should be offloaded to the MEC. The authors propose an optimal adaptive algorithm based on a combinatorial optimization method with complexity up to O(2^N). To decrease the complexity of the optimal algorithm, also a sub-optimal algorithm is proposed reducing the complexity to O(N). The optimal algorithm is able to achieve up to 48% energy savings while the sub-optimal one performs only slightly worse
(up to 47% energy savings). Moreover, it is shown that increasing SINR between the UE and the serving eNBs leads to more prominent energy savings.

The minimization of the energy consumption while satisfying the delay constraints of the whole application is also the main objective of [72]. Contrary to [88], the application in [72] is supposed to be composed of several atomic parts dependent on each other, i.e., some parts may be processed only after execution of other parts, as shown in Fig. 10 in Section IV. The authors formulate the offloading problem as a 0-1 programming model, where 0 stands for the application offloading and 1 represents the local computation at the UE. Nevertheless, the optimal solution is of a high complexity as there exist 2^N possible solutions to this problem (i.e., O(2^N · N^2)). Hence, a heuristic algorithm exploiting the Binary Particle Swarm Optimizer (BPSO) [89] is proposed to reduce the complexity to O(G·K·N^2), where G is the number of iterations, and K is the number of particles. The BPSO algorithm is able to achieve practically the same results as the highly complex optimal solution in terms of the energy consumption. Moreover, the partial offloading results in more significant energy savings with respect to the full offloading (up to 25% energy savings at the UE).
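A generic form of such a 0-1 partial offloading formulation, written here only as a sketch consistent with the description above (the exact model of [72] additionally captures the precedence relations among the atomic parts), is

\[
\min_{x_1,\dots,x_N \in \{0,1\}} \; \sum_{k=1}^{N} \Big[ x_k E_{l,k} + (1 - x_k)\big(E_{ot,k} + E_{or,k}\big) \Big]
\quad \text{s.t.} \quad D(x_1,\dots,x_N) \le D_{max},
\]

where x_k = 1 denotes local computation of the k-th offloadable part (following the convention above), E_{l,k} is its local execution energy, E_{ot,k} and E_{or,k} are the energies spent on transmitting its input and receiving its output, and D(.) is the resulting overall execution delay of the application.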
A drawback of both above papers focusing in detail on the partial computation offloading is the assumption of only a single UE in the system. Hence, Zhao et al. [90] address the partial offloading decision problem for the multi-UEs scenario. With respect to [72] and [88], the application to be offloaded does not contain any non-offloadable parts and, in some extreme cases, the whole application may be offloaded if profitable (i.e., the application is structured as illustrated in Fig. 9a). The UEs are assumed to be able to determine whether to partition the application and how many parts should be offloaded to the MEC. The problem is formulated as a nonlinear constraint problem of a high complexity. As a consequence, it is simplified to a problem solvable by linear programming and resulting in the complexity O(N) (N is the number of UEs performing the offloading). If the optimal solution applying exhaustive search is used, 40% energy savings are achieved when compared to the scenario with no offloading. In case of the heuristic low complex algorithm, 30% savings are observed for the UEs. The disadvantage of the proposal is that it assumes the UEs in the system have the same channel quality and all of them are of the same computing capabilities. These assumptions, however, are not realistic for the real network.

A multi-UEs scenario is also assumed in [91], where You and Huang assume a TDMA based system where time is divided into slots with duration of T seconds. During each slot, the UEs may offload a part of their data to the MEC according to their channel quality, local computing energy consumption, and fairness among the UEs. In this regard, an optimal resource allocation policy is defined giving higher priority to those UEs that are not able to meet the application latency constraints if the computation would be done locally. After that, the optimal resource allocation policy with a threshold based structure is proposed. In other words, the optimal policy makes a binary offloading decision for each UE. If the UE has a priority higher than a given threshold, the UE performs full computation offloading to the MEC. Contrary, if the UE has a lower priority than the threshold, it offloads only the minimum amount of computation to satisfy the application latency constraints. Since the optimal joint allocation of communication and computation resources is of a high complexity, the authors also propose a sub-optimal allocation algorithm, which decouples communication and computation resource allocation. The simulation results indicate this simplification leads to negligibly higher total energy consumption of the UE when compared to the optimal allocation. The paper is further extended in [92], where You et al. show that OFDMA access enables roughly ten times higher energy savings achieved by the UEs compared to a TDMA system due to higher granularity of radio resources.

In all above-mentioned papers on partial offloading, the minimization of the UE's energy consumption depends on the quality of the radio communication channel and transmission power of the UE. Contrary, in [93], the minimization of energy consumption while satisfying execution delay of the application is accomplished through the DVS technique. In this respect, the authors propose an energy-optimal partial offloading scheme that forces the UE to adapt its computing power depending on the maximal allowed latency of the application (LMAX). In other words, the objective of the proposed scheme is to guarantee that the actual latency of the application is always equal to LMAX. As a consequence, the energy consumption is minimized while the QoS perceived by the users is not negatively affected.

2) Trade-Off Between Energy Consumption and Execution Delay: A trade-off analysis between the energy consumption and the execution delay for the partial offloading decision is delivered in [94]. Similarly as in [90], the application to be offloaded contains only offloadable parts and, in the extreme case, the full offloading may occur (as explained in Section V-B). The offloading decision considers the following parameters: 1) total number of bits to be processed, 2) computational capabilities of the UE and the MEC, 3) channel state between the UE and the serving SCeNB that provides access to the MEC, and 4) energy consumption of the UE. The computation offloading decision is formulated as a joint optimization of communication and computation resources allocation. The simulation results indicate that the energy consumption at the UE decreases with increasing total execution time. This decrease, however, is notable only for small execution time duration. For a larger execution time, the gain in the energy savings is inconsequential. Moreover, the authors show the offloading is not profitable if the communication channel is of a low quality since a high amount of energy is spent to offload the application. In such a situation, the whole application is preferred to be processed locally at the UE. With an intermediate channel quality, a part of the computation is offloaded to the MEC as this results in energy savings. Finally, if the channel is of a high quality, the full offloading is preferred since the energy consumption for data transmission is low while the savings accomplished by the computation offloading are high.

The study in [95] provides more in-depth theoretical analysis on trade-off between the energy consumption and the
latency of the offloaded applications preliminarily handled in [94]. Moreover, the authors further demonstrate that the probability of the computation offloading is higher for good channel quality. With a higher number of antennas (4x2 MIMO and 4x4 MIMO is assumed), the offloading is done more often and the energy savings at the UE are more significant when compared to SISO or MISO (up to 97% reduction of energy consumption for the 4x4 MIMO antenna configuration). Note that the same conclusion is also reached, e.g., in [81] and [82].

TABLE III
COMPARISON OF INDIVIDUAL PAPERS ADDRESSING COMPUTATION OFFLOADING DECISIONS

The main drawback in [94] and [95] is that these papers consider only the single-UE scenario. A trade-off analysis between the energy consumption at the UE and the execution delay for the multi-UEs scenario is delivered in [96]. In case of the multi-UEs scenario, the whole joint optimization process proposed in [95] has to be further modified since both communication and computation resources provided by the MEC are shared among multiple UEs. In the paper, it is proven that with more UEs in the system, it takes more time to offload the
TABLE IV
COMPARISON OF INDIVIDUAL PAPERS ADDRESSING ALLOCATION OF COMPUTATION RESOURCES FOR APPLICATION/DATA ALREADY DECIDED TO BE OFFLOADED

is able to satisfy 100% of the UEs as long as the number of offloaded tasks per second is up to 6. Moreover, the paper shows that task parallelization helps to better balance the computation load.

The main objective to balance the load (both communication and computation) among physical computing nodes and, at the same time, to minimize the resource utilization of each physical computing node (i.e., reducing the sum resource utilization) is also considered in [107]. The overall problem is formulated as a placement of an application graph onto a physical graph. The former represents the application, where nodes in the graph correspond to individual components of the application and edges to the communication requirements between them. The latter represents the physical computing system, where the nodes in the graph are individual computing devices and the edges stand for the capacity of the communication links between them (see the example of application and physical graphs in Fig. 18 for the face recognition application). The authors firstly propose an algorithm finding the optimal solution for the linear application graph and, then, more general online approximation algorithms. The numerical results demonstrate that the proposed algorithm is able to outperform two heuristic approaches in terms of resource utilization by roughly 10%.
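To illustrate what placing an application graph onto a physical graph involves, the following toy example checks a candidate placement against node (CPU) and link (bandwidth) capacities. The graphs, numbers and the face-recognition-like component names are illustrative assumptions; the example does not reproduce the algorithm of [107].

```python
# Application graph: components with CPU demand, edges with data-rate demand.
app_nodes = {"detect_face": 2.0, "extract_features": 3.0, "match_db": 4.0}  # CPU units
app_edges = {("detect_face", "extract_features"): 1.5,                       # Mb/s
             ("extract_features", "match_db"): 0.5}

# Physical graph: computing nodes with CPU capacity, links with bandwidth.
phys_nodes = {"ue": 2.0, "scenb": 4.0, "mec_server": 8.0}                     # CPU units
phys_links = {("ue", "scenb"): 10.0, ("scenb", "mec_server"): 20.0}           # Mb/s

def link_capacity(a, b):
    if a == b:
        return float("inf")  # co-located components exchange data locally
    return phys_links.get((a, b)) or phys_links.get((b, a)) or 0.0

def placement_feasible(placement):
    """placement maps each application component to a physical node."""
    # check CPU capacity of every physical node
    used = {n: 0.0 for n in phys_nodes}
    for comp, node in placement.items():
        used[node] += app_nodes[comp]
    if any(used[n] > phys_nodes[n] for n in phys_nodes):
        return False
    # check the bandwidth of every application edge on its physical link
    return all(demand <= link_capacity(placement[u], placement[v])
               for (u, v), demand in app_edges.items())

print(placement_feasible({"detect_face": "ue",
                          "extract_features": "scenb",
                          "match_db": "mec_server"}))
```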
C. Summary of Works Dealing With Allocation of Computing Resources

The comparison of individual methods addressing allocation of the computation resources within the MEC is shown in Table IV. The main objective of the studies dealing with the allocation of computation resources is to minimize the execution delay of the offloaded application (D). In other words, the aim is to ensure QoS to the UEs in order to fully exploit the proximity of the MEC with respect to the computing in a faraway CC. Moreover, several studies also focus on minimization of the energy consumption of computing nodes (EC). In addition, some limited effort has been focused on balancing of computing and communication load to more easily satisfy the requirements on execution delay and/or to minimize overall resource utilization.

A common drawback of all proposed solutions is that only simulations are provided to demonstrate proposed solutions for allocation of MEC computing resources. Moreover, all
TABLE V
COMPARISON OF INDIVIDUAL PAPERS FOCUSING ON MOBILITY MANAGEMENT IN MEC

algorithm's complexity is further decreased to O(I^n), where I is the set of SCeNBs with sufficient radio/backhaul link quality. It is shown that the proposed path selection algorithm is able to reduce transmission delay by 54%.

The path selection algorithm contemplated in [122] and [123] may not be sufficient if the UE is too far away from the computing location since increased transmission delay may result in QoS reduction notwithstanding. Hence, Plachy et al. [124] suggest a cooperation between an algorithm for the dynamic VM migration and the path selection algorithm proposed in [123], further enhanced by consideration of a mobility prediction. The first algorithm decides whether the VM migration should be initiated or not based on the mobility prediction and the computation/communication load of the eNB(s). The second algorithm, then, finds the most suitable route for downloading the offloaded data with the mobility prediction outcomes taken into account. The complexity of the first algorithm is
O(|Z||I|τ) and the complexity of the second algorithm equals O(|I|τ), where Z is the number of eNBs with sufficient channel quality and computing capacity, and τ stands for the size of the prediction window. The proposed algorithm is able to reduce the average offloading time by 27% compared to the situation when the VM migration is performed after each conventional handover and by roughly 10% with respect to [123].

D. Summary of Works Focused on Mobility Management

A comparison of the studies addressing the mobility issues for the MEC is shown in Table V. As can be observed from Table V, the majority of works so far focuses on the VM migration. Basically, the related papers try to find an optimal decision policy whether the VM migration should be initiated or not to minimize overall system cost (up to 32% and up to 50% reduction of average cost is achieved compared to never and always migrate options, respectively [117]). Moreover, some papers aim to find a proper trade-off between VM migration cost and VM migration gain [112], minimizing execution delay [115], minimizing VM migration time [119], or maximizing overall throughput [120].
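The migrate-or-not policies summarized above essentially weigh the one-time cost of moving the VM against the recurring cost of serving the UE from a now distant computing node. A minimal sketch of such a comparison is given below; the cost terms and numbers are hypothetical and do not correspond to the model of any specific paper in Table V.

```python
def should_migrate(migration_cost, remote_delay, local_delay,
                   delay_cost_per_ms, expected_residence_s, request_rate_hz):
    """Migrate the VM if the accumulated gain from the lower execution
    delay at the new serving node outweighs the one-time migration cost.
    All inputs are estimates (assumed to come from mobility prediction
    and network monitoring)."""
    delay_gain_per_request = (remote_delay - local_delay) * delay_cost_per_ms
    expected_requests = expected_residence_s * request_rate_hz
    return expected_requests * delay_gain_per_request > migration_cost

# hypothetical numbers: 30 ms saved per request, 20 requests/s,
# UE expected to stay 60 s in the new cell
print(should_migrate(migration_cost=5000.0, remote_delay=50.0, local_delay=20.0,
                     delay_cost_per_ms=1.0, expected_residence_s=60.0,
                     request_rate_hz=20.0))
```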
OFDMA enables significantly higher energy savings of
From Table V can be further observed that all papers dealing
the UEs than TDMA due to higher granularity of radio
with the VM migration assume the computation is done by a
resources [92].
single computing node. Although this option is less complex,
• The partial offloading can save significantly more
the parallel computation by more nodes should not be entirely
energy at the UE when compared to the full offload-
neglected as most of the papers focusing on the allocation of
ing [72]. Nevertheless, in order to perform the partial
computing resources assume multiple computing nodes (see
offloading, the application has to enable paralleliza-
Section VI-B).
tion/partitioning. Hence, the energy savings accom-
plished by computation offloading is also strongly
VIII. L ESSONS L EARNED related to the application type and the way how the
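As referenced in the first observation above, the channel-quality and data-size effects boil down to comparing the energy of local execution with the energy of transmitting the offloaded data at the achievable rate. The sketch below illustrates this comparison with a commonly used first-order model; the power figures, the effective switched capacitance, and the Shannon-type rate are illustrative assumptions, not values from the surveyed papers.

import math

def local_energy(cycles, f_local_hz, kappa=1e-28):
    """Energy of local execution with the widely used kappa * C * f^2 model
    (kappa is an assumed effective switched capacitance)."""
    return kappa * cycles * f_local_hz ** 2

def offload_energy(data_bits, bandwidth_hz, snr_linear, p_tx_w=0.2, p_idle_w=0.02,
                   cycles=0, f_mec_hz=10e9):
    """Energy at the UE: transmit the input data at the achievable rate,
    then idle while the MEC computes."""
    rate = bandwidth_hz * math.log2(1 + snr_linear)      # achievable rate [bit/s]
    t_tx = data_bits / rate
    t_exec = cycles / f_mec_hz
    return p_tx_w * t_tx + p_idle_w * t_exec

def offload_pays_off(data_bits, cycles, bandwidth_hz, snr_linear, f_local_hz=1e9):
    return offload_energy(data_bits, bandwidth_hz, snr_linear,
                          cycles=cycles) < local_energy(cycles, f_local_hz)

# A computation-heavy, data-light task offloads even at moderate SNR,
# while a data-heavy task with the same complexity does not.
print(offload_pays_off(data_bits=1e5, cycles=5e9, bandwidth_hz=10e6, snr_linear=10))
print(offload_pays_off(data_bits=5e8, cycles=5e9, bandwidth_hz=10e6, snr_linear=10))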
From the surveyed papers focused on the allocation of computing resources, the following key facts are learned:

• The allocation of computation resources is strongly related to the type of the application being offloaded, in the sense that only applications allowing parallelization/partitioning may be distributed to multiple computing nodes. Obviously, a proper parallelization and code partitioning of the offloaded application can result in shorter execution delays, as multiple nodes may pool their computing resources (up to 90% reduction of the execution delay when compared to a single computing node). On the other hand, the allocation of computation resources for parallelized applications is significantly more complex.

• An increase in the number of computing nodes does not always have to result in a reduction of the execution delay [102]. On the contrary, if the communication delay becomes predominant over the computation delay, the overall execution delay may even increase. Hence, a proper trade-off between the number of computing nodes and the execution delay needs to be carefully considered when allocating computing resources to offloaded data (see the sketch after this list).

• If the backhaul is of a low quality, it is mostly preferred to perform the computation locally by the serving node (e.g., SCeNB/eNB), since the distribution of data for computing is too costly in terms of the transmission latency. Conversely, a high-quality backhaul is a prerequisite for an efficient offloading to multiple computing nodes.

• The execution delay of the offloaded application depends not only on the backhaul quality, but also on the backhaul topology (e.g., mesh, ring, tree, etc.) [102]. The mesh topology is the most advantageous in terms of the execution delay, since all computing nodes are connected directly and the distribution of the offloaded data for computing is more convenient. On the other hand, a mesh topology would require a huge investment in the backhaul.
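The trade-off between adding computing nodes and paying more for data distribution, referenced in the second fact above, can be made explicit with a toy delay model: splitting the task over n nodes shrinks the computation time roughly as 1/n, while every additional node adds backhaul transfer time. The rates, capacities, and task parameters below are illustrative assumptions only.

def execution_delay(n_nodes, total_cycles, input_bits, node_rate_hz, backhaul_bps):
    """Offloaded task split evenly over n nodes: each node receives its share
    of the input over the backhaul and all nodes compute in parallel."""
    per_node_cycles = total_cycles / n_nodes
    per_node_bits = input_bits / n_nodes
    # the serving node forwards the shares one after another over the backhaul
    distribution = (n_nodes - 1) * per_node_bits / backhaul_bps
    computation = per_node_cycles / node_rate_hz
    return distribution + computation

def best_number_of_nodes(max_nodes, **kw):
    return min(range(1, max_nodes + 1), key=lambda n: execution_delay(n, **kw))

# With a fast backhaul, parallelization helps; with a slow one, the optimum
# collapses back towards a single computing node.
task = dict(total_cycles=20e9, input_bits=80e6, node_rate_hz=5e9)
print(best_number_of_nodes(8, backhaul_bps=1e9, **task))   # several nodes pay off
print(best_number_of_nodes(8, backhaul_bps=10e6, **task))  # a single node is best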
Finally, after surveying the papers addressing mobility issues in the MEC, we list the following key findings:

• There are several options for the UE's mobility management if the data/application is offloaded to the MEC. In case of low mobility, the power control at the SCeNB/eNB side can be sufficient to handle the mobility (up to 98% of offloaded applications can be successfully delivered back to the UE [109]). This is true as long as the adaptation of the transmission power enables keeping the UE at the same serving station during the computation offloading. However, if the UE performs a handover, the power control alone is not sufficient and the VM migration or a new communication path selection may be necessary to comply with the latency requirements of the offloaded applications.

• A decision on the VM migration depends strongly on three metrics (a simple rule combining them is sketched after this list):
  1) The VM migration cost (CostM) representing the time required for the service migration and the backhaul resources spent by the transmission of the VM(s) between the computing nodes.
  2) The VM migration gain (GainM) constituting the delay reduction (data are computed in proximity of the UE) and the saving of the backhaul resources (data does not have to be sent through several nodes).
  3) The computing load of the node(s) to which the VM is reallocated, since, in some situations, the optimal computing node for the VM migration may be unavailable due to its high computation load.

• The VM migration is impractical if a huge amount of data needs to be transmitted between the computing nodes and/or if the backhaul resources between the VMs are inadequate, since it may take minutes or even hours to migrate the whole VM. This is obviously too long for real-time services and it also implies a significant load on the backhaul, especially if the VM migration would need to be performed frequently. Note that a time-consuming migration goes against the major benefit of the MEC, i.e., the low latency that makes the offloading suitable for real-time services.

• The minimization of the VM migration time can be achieved by a reduction of the amount of migrated data [119]. Nonetheless, even this option is not enough for real-time services. Thus, various path selection algorithms should be employed with the purpose of finding the optimal path for the delivery of the offloaded data back to the UEs while the computing is done by the same node(s) (i.e., without the VM migration) [123]. However, if the UE moves too far away from the computation placement, a more robust mobility management based on joint VM migration and path selection should be adopted [124].
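As referenced in the second finding above, the three metrics naturally combine into a simple migrate-or-not rule: migrate only when the expected gain outweighs the migration cost and the target node can actually absorb the VM. The weighting of the delay and backhaul terms and the load threshold below are illustrative assumptions, not a policy proposed in the surveyed works.

def migration_gain(delay_reduction_s, backhaul_saving_mb, w_delay=1.0, w_backhaul=0.01):
    """Gain_M: value of computing closer to the UE plus saved backhaul traffic."""
    return w_delay * delay_reduction_s + w_backhaul * backhaul_saving_mb

def migration_cost(vm_size_mb, backhaul_rate_mbps, w_time=1.0, w_backhaul=0.01):
    """Cost_M: time to move the VM plus the backhaul resources it consumes."""
    migration_time_s = 8 * vm_size_mb / backhaul_rate_mbps   # MB -> Mbit
    return w_time * migration_time_s + w_backhaul * vm_size_mb

def should_migrate(delay_reduction_s, backhaul_saving_mb,
                   vm_size_mb, backhaul_rate_mbps,
                   target_load, load_threshold=0.8):
    if target_load > load_threshold:          # metric 3: target cannot host the VM
        return False
    gain = migration_gain(delay_reduction_s, backhaul_saving_mb)
    cost = migration_cost(vm_size_mb, backhaul_rate_mbps)
    return gain > cost                        # metrics 1 and 2

# Small VM, clear latency benefit, lightly loaded target -> migrate.
print(should_migrate(delay_reduction_s=1.5, backhaul_saving_mb=50,
                     vm_size_mb=100, backhaul_rate_mbps=1000, target_load=0.4))
# Bulky VM over a weak backhaul -> migration does not pay off.
print(should_migrate(delay_reduction_s=1.5, backhaul_saving_mb=50,
                     vm_size_mb=4000, backhaul_rate_mbps=100, target_load=0.4))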
IX. OPEN RESEARCH CHALLENGES AND FUTURE WORK

As shown in the previous sections, the MEC has attracted a lot of attention in recent years due to its ability to significantly reduce the energy consumption of the UEs while, at the same time, enabling real-time application offloading thanks to the proximity of computing resources to the users. Despite this fact, the MEC is still a rather immature technology and there are many challenges that need to be addressed before its implementation into mobile networks becomes beneficial. This section discusses several open research challenges not addressed by the current research.

A. Distribution and Management of MEC Resources

In Section III, we have discussed several possible options for the placement of the computing nodes enabling the MEC within the mobile network architecture. To guarantee ubiquitous MEC services for all users wanting to utilize the MEC, the MEC servers and the computation/storage resources should be distributed throughout the whole network. Consequently, the individual options where to physically place the MEC servers should complement each other in a hierarchical way. This will allow an efficient usage of the computing resources while respecting the QoS and QoE requirements of the users. In this context, an important challenge is to find an optimal way where to physically place the computation depending on the expected users' demands while, at the same time, considering the related CAPEX and OPEX (as initially tackled in [67] and [68]).

Another missing topic in the literature is a design of efficient control procedures for a proper management of the MEC resources. This includes the design of signalling messages, their exchange, and their optimization in terms of signalling overhead. The control messages should be able to deliver status information, such as the load of individual computing nodes and the quality of wireless/backhaul links, in order to efficiently orchestrate computing resources within the MEC. There is a trade-off between a high signalling overhead related to a frequent exchange of the status information and an impact on the MEC performance due to aging of the status information if it is exchanged rarely. This trade-off has to be carefully analysed and efficient signalling mechanisms need to be proposed to ensure that the control entities in the MEC have up-to-date information at their disposal while the cost to obtain it is minimized.
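The signalling trade-off mentioned above can be phrased as picking a status-reporting period that balances overhead against information aging. The linear overhead and staleness terms below form an illustrative model, not one proposed in the surveyed literature.

def signalling_cost(period_s, msg_cost=1.0, staleness_penalty=0.4):
    """Total cost per second: overhead decreases with the reporting period,
    while the expected staleness of the status information grows with it."""
    overhead = msg_cost / period_s               # messages per second
    avg_staleness = period_s / 2.0               # mean age of the last report
    return overhead + staleness_penalty * avg_staleness

def best_reporting_period(candidates_s):
    return min(candidates_s, key=signalling_cost)

periods = [0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0]
print(best_reporting_period(periods))   # analytic optimum: sqrt(2*msg_cost/staleness_penalty)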
B. Offloading Decision

The offloading decision plays a crucial part, as it basically determines whether the computation will be performed locally, remotely, or jointly in both locations, as discussed in Section V. All papers focusing on the offloading decision consider only the energy consumption at the side of the UE. However, to be in line with future green networking, the energy consumption at the MEC (including the computation as well as the related communication) should also be taken into account during the decision. Moreover, all papers dealing with the offloading decision assume strictly static scenarios, i.e., the UEs are not moving before and during the offloading. Nevertheless, the energy necessary for the transmission of the offloaded data can change significantly even during the offloading if the channel quality drops due to slow movement or fading. This can result in a situation where the offloading actually increases the energy consumption and/or the execution delay compared to local computation. Hence, it is necessary to propose new advanced methods for the offloading decision, for instance, exploiting various prediction techniques for the UE's mobility and channel quality during the offloading to better estimate how much the offloading will cost under varying conditions.
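As an illustration of the prediction-aware decision suggested above, the sketch below estimates the UE-side offloading energy over a predicted sequence of channel rates instead of a single snapshot, and adds a coarse MEC-side term in the spirit of the green-networking remark. All rates, powers, and the MEC energy-per-cycle value are hypothetical.

def offload_energy_over_prediction(data_bits, predicted_rates_bps, slot_s=0.1,
                                   p_tx_w=0.2):
    """UE-side transmission energy when the data is sent over consecutive
    slots whose predicted rates may degrade while the UE moves."""
    remaining, energy = data_bits, 0.0
    for rate in predicted_rates_bps:
        sent = min(remaining, rate * slot_s)
        energy += p_tx_w * (sent / rate)      # time actually spent transmitting
        remaining -= sent
        if remaining <= 0:
            return energy
    return float("inf")                        # data not delivered within the window

def total_offloading_energy(data_bits, cycles, predicted_rates_bps,
                            mec_energy_per_cycle=2e-10):
    """UE energy plus a coarse MEC-side computation energy term."""
    ue = offload_energy_over_prediction(data_bits, predicted_rates_bps)
    return ue + mec_energy_per_cycle * cycles

good = [30e6] * 20                       # stable channel during the offloading
fading = [30e6, 20e6, 5e6, 1e6, 1e6]     # channel collapses while the UE moves
print(total_offloading_energy(2e7, 1e9, good))
print(total_offloading_energy(2e7, 1e9, fading))   # may even become infeasible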
Besides, current papers focusing on the partial offloading decision disregard the option to offload individual parts to multiple computing nodes. Multiple computing nodes enable a higher flexibility and increase the probability that the offloading to the MEC will be efficient for the UE (in terms of both the energy consumption and the execution delay). Of course, a significant challenge in this scenario is the consideration of the backhaul between the MEC servers and the ability to reflect their varying load and parameters during the offloading decision.

D. Mobility Management

So far, the works focusing on mobility management, and particularly on the VM migration, consider mostly a scenario where only a single computing node (SCeNB or eNB) performs the computation for each UE. Hence, the challenge is how to efficiently handle the VM migration procedure when the application is offloaded to several computing nodes. Moreover, the VM migration imposes a high load on the backhaul and leads to a high delay, which makes it unsuitable for real-time applications. Hence, new advanced techniques enabling a very fast VM migration, in the order of milliseconds, should be developed. However, this alternative is very challenging due to the communication limits between computing nodes. Therefore, a more realistic challenge is how to pre-migrate the computation in advance (e.g., based on some prediction techniques) so that no service disruption is observed by the users.

Despite the above-mentioned suggestions potentially reducing the VM migration time, a stand-alone VM migration may still be unsuitable for real-time applications. Consequently, it is important to aim the majority of the research effort at a cooperation of the individual techniques for mobility management. In this regard, dynamic optimization and joint consideration of all techniques (such as power control, VM migration, compression of migrated data, and/or path selection) should be studied more closely in order to enhance the QoE for the UEs and to optimize the overall system performance for moving users.
[31] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, “CloneCloud: Elastic execution between mobile device and cloud,” in Proc. Eur. Conf. Comput. Syst. (EuroSys), Salzburg, Austria, 2011, pp. 301–314.
[32] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, “ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading,” in Proc. IEEE INFOCOM, Orlando, FL, USA, 2012, pp. 945–953.
[33] W. Zhang, Y. Wen, and D. O. Wu, “Energy-efficient scheduling policy for collaborative execution in mobile cloud computing,” in Proc. IEEE INFOCOM, Turin, Italy, 2013, pp. 190–194.
[34] W. Zhang, Y. Wen, and D. O. Wu, “Collaborative task execution in mobile cloud computing under a stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 14, no. 1, pp. 81–93, Jan. 2015.
[35] Y. Wen, W. Zhang, and H. Luo, “Energy-optimal mobile application execution: Taming resource-poor mobile devices with cloud clones,” in Proc. IEEE INFOCOM, Orlando, FL, USA, 2012, pp. 2716–2720.
[36] H. Flores et al., “Mobile code offloading: From concept to practice and beyond,” IEEE Commun. Mag., vol. 53, no. 3, pp. 80–88, Mar. 2015.
[37] L. Jiao et al., “Cloud-based computation offloading for mobile devices: State of the art, challenges and opportunities,” in Proc. Future Netw. Mobile Summit, Lisbon, Portugal, 2013, pp. 1–11.
[38] ETSI, “Mobile edge computing (MEC): Technical requirements,” V1.1.1, Mar. 2016.
[39] M. T. Beck, M. Werner, S. Feld, and T. Schimper, “Mobile edge computing: A taxonomy,” in Proc. Int. Conf. Adv. Future Internet (AFIN), Lisbon, Portugal, Nov. 2014, pp. 48–54.
[40] N. Takahashi, H. Tanaka, and R. Kawamura, “Analysis of process assignment in multi-tier mobile cloud computing and application to edge accelerated Web browsing,” in Proc. IEEE Int. Conf. Mobile Cloud Comput. Services Eng., San Francisco, CA, USA, 2015, pp. 233–234.
[41] Y. Zhang, H. Liu, L. Jiao, and X. Fu, “To offload or not to offload: An efficient code partition algorithm for mobile cloud computing,” in Proc. 1st Int. Conf. Cloud Netw. (CLOUDNET), Paris, France, 2012, pp. 80–86.
[42] J. Dolezal, Z. Becvar, and T. Zeman, “Performance evaluation of computation offloading from mobile device to the edge of mobile network,” in Proc. IEEE Conf. Stand. Commun. Netw. (CSCN), Berlin, Germany, 2016, pp. 1–7.
[43] O. Salman, I. Elhajj, A. Kayssi, and A. Chehab, “Edge computing enabling the Internet of Things,” in Proc. IEEE 2nd World Forum Internet Things (WF-IoT), Milan, Italy, 2015, pp. 603–608.
[44] S. Abdelwahab, B. Hamdaoui, M. Guizani, and T. Znati, “REPLISOM: Disciplined tiny memory replication for massive IoT devices in LTE edge cloud,” IEEE Internet Things J., vol. 3, no. 3, pp. 327–338, Jun. 2016.
[45] X. Sun and N. Ansari, “EdgeIoT: Mobile edge computing for the Internet of Things,” IEEE Commun. Mag., vol. 54, no. 12, pp. 22–29, Dec. 2016.
[46] NOKIA: Multi-Access Edge Computing. Accessed on Mar. 20, 2017. [Online]. Available: https://networks.nokia.com/solutions/multi-access-edge-computing
[47] NOKIA: Mobile Edge Computing. Accessed on Mar. 20, 2017. [Online]. Available: http://resources.alcatel-lucent.com/asset/200546
[48] (2012). FP7 European Project, Distributed Computing, Storage and Radio Resource Allocation Over Cooperative Femtocells (TROPIC). [Online]. Available: http://www.ict-tropic.eu/
[49] (2015). H2020 European Project, Small cEllS coordinAtion for Multi-tenancy and edge services (SESAM). [Online]. Available: http://www.sesame-h2020-5g-ppp.eu/
[50] I. Giannoulakis et al., “The emergence of operator-neutral small cells as a strong case for cloud computing at the mobile edge,” Trans. Emerg. Telecommun. Technol., vol. 27, no. 9, pp. 1152–1159, 2016.
[51] M. Chiosi et al., “Network functions virtualisation: An introduction, benefits, enablers, challenges & call for action,” introductory white paper, 2012.
[52] ETSI, “Network function virtualisation (NFV): Architectural framework,” V1.1.1, Oct. 2013.
[53] F. Lobillo et al., “An architecture for mobile computation offloading on cloud-enabled LTE small cells,” in Proc. Workshop Cloud Technol. Energy Efficiency Mobile Commun. Netw. (IEEE WCNCW), Istanbul, Turkey, 2014, pp. 1–6.
[54] M. A. Puente, Z. Becvar, M. Rohlik, F. Lobillo, and E. C. Strinati, “A seamless integration of computationally-enhanced base stations into mobile networks towards 5G,” in Proc. IEEE Veh. Technol. Conf. Workshops (IEEE VTC Spring), Glasgow, U.K., 2015, pp. 1–5.
[55] Z. Becvar et al., “Distributed architecture of 5G mobile networks for efficient computation management in mobile edge computing,” chapter in 5G Radio Access Network (RAN)—Centralized RAN, Cloud-RAN and Virtualization of Small Cells, H. Venkataraman and R. Trestian, Eds. Boca Raton, FL, USA: Taylor and Francis Group, Mar. 2017.
[56] S. Wang et al., “Mobile micro-cloud: Application classification, mapping, and deployment,” in Proc. Annu. Fall Meeting ITA (AMITA), New York, NY, USA, Oct. 2013, pp. 1–7.
[57] K. Wang et al., “MobiScud: A fast moving personal cloud in the mobile network,” in Proc. Workshop All Things Cellular Oper. Appl. Challenge, London, U.K., 2015, pp. 19–24.
[58] A. Manzalini et al., “Towards 5G software-defined ecosystems: Technical challenges, business sustainability and policy issues,” white paper, 2014.
[59] T. Taleb and A. Ksentini, “Follow me cloud: Interworking federated clouds and distributed mobile networks,” IEEE Netw., vol. 27, no. 5, pp. 12–19, Sep./Oct. 2013.
[60] T. Taleb, A. Ksentini, and P. A. Frangoudis, “Follow-me cloud: When cloud services follow mobile users,” IEEE Trans. Cloud Comput., to be published.
[61] A. Aissioui, A. Ksentini, and A. Gueroui, “An efficient elastic distributed SDN controller for follow-me cloud,” in Proc. IEEE Int. Conf. Wireless Mobile Comput. Netw. Commun. (WiMob), Abu Dhabi, UAE, 2015, pp. 876–881.
[62] J. Liu, T. Zhao, S. Zhou, Y. Cheng, and Z. Niu, “CONCERT: A cloud-based architecture for next-generation cellular systems,” IEEE Wireless Commun., vol. 21, no. 6, pp. 14–22, Dec. 2014.
[63] ETSI, “Mobile edge computing (MEC): Terminology,” V1.1.1, Mar. 2016.
[64] ETSI, “Mobile edge computing (MEC): Proof of concept framework,” V1.1.1, Mar. 2016.
[65] ETSI, “Mobile edge computing (MEC): Service scenarios,” V1.1.1, Mar. 2016.
[66] ETSI, “Mobile edge computing (MEC): Framework and reference architecture,” V1.1.1, Mar. 2016.
[67] A. Ceselli, M. Premoli, and S. Secci, “Cloudlet network design optimization,” in Proc. IFIP Netw., Toulouse, France, 2015, pp. 1–9.
[68] A. Ceselli, M. Premoli, and S. Secci, “Mobile edge cloud network design optimization,” IEEE/ACM Trans. Netw., to be published.
[69] D. Kreutz et al., “Software-defined networking: A comprehensive survey,” Proc. IEEE, vol. 103, no. 1, pp. 14–76, Jan. 2015.
[70] N. A. Jagadeesan and B. Krishnamachari, “Software-defined networking paradigms in wireless networks: A survey,” ACM Comput. Surveys, vol. 47, no. 2, Jan. 2015, Art. no. 27.
[71] X. Jin, L. E. Li, L. Vanbever, and J. Rexford, “SoftCell: Scalable and flexible cellular core network architecture,” in Proc. ACM Conf. Emerg. Netw. Exp. Technol. (CoNEXT), Santa Barbara, CA, USA, 2013, pp. 163–174.
[72] M. Deng, H. Tian, and B. Fan, “Fine-granularity based application offloading policy in cloud-enhanced small cell networks,” in Proc. IEEE Int. Conf. Commun. Workshops (ICC), Kuala Lumpur, Malaysia, 2016, pp. 638–643.
[73] S. E. Mahmoodi, R. N. Uma, and K. P. Subbalakshmi, “Optimal joint scheduling and cloud offloading for mobile applications,” IEEE Trans. Cloud Comput., to be published.
[74] J. Liu, Y. Mao, J. Zhang, and K. B. Letaief, “Delay-optimal computation task scheduling for mobile-edge computing systems,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, Spain, 2016, pp. 1451–1455.
[75] Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3590–3605, Dec. 2016.
[76] W. Zhang et al., “Energy-optimal mobile cloud computing under stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4569–4581, Sep. 2013.
[77] S. Ulukus et al., “Energy harvesting wireless communications: A review of recent advances,” IEEE J. Sel. Areas Commun., vol. 33, no. 3, pp. 360–381, Mar. 2015.
[78] M. Kamoun, W. Labidi, and M. Sarkiss, “Joint resource allocation and offloading strategies in cloud enabled cellular networks,” in Proc. IEEE Int. Conf. Commun. (ICC), London, U.K., 2015, pp. 5529–5534.
[79] W. Labidi, M. Sarkiss, and M. Kamoun, “Energy-optimal resource scheduling and computation offloading in small cell networks,” in Proc. Int. Conf. Telecommun. (ICT), Sydney, NSW, Australia, 2015, pp. 313–318.
[80] W. Labidi, M. Sarkiss, and M. Kamoun, “Joint multi-user resource scheduling and computation offloading in small cell networks,” in Proc. IEEE Int. Conf. Wireless Mobile Comput. Netw. Commun. (WiMob), Abu Dhabi, UAE, 2015, pp. 794–801.
[81] S. Barbarossa, S. Sardellitti, and P. Di Lorenzo, “Joint allocation of computation and communication resources in multiuser mobile cloud computing,” in Proc. IEEE Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Darmstadt, Germany, 2013, pp. 26–30.
[82] S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio and computational resources for multicell mobile cloud computing,” in Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Toronto, ON, Canada, 2014, pp. 354–358.
[83] S. Sardellitti, S. Barbarossa, and G. Scutari, “Distributed mobile cloud computing: Joint optimization of radio and computational resources,” in Proc. IEEE Globecom Workshops (GC Wkshps), Austin, TX, USA, 2014, pp. 1505–1510.
[84] K. Zhang et al., “Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks,” IEEE Access, vol. 4, pp. 5896–5907, 2016.
[85] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Trans. Netw., vol. 24, no. 5, pp. 2795–2808, Oct. 2016.
[86] M.-H. Chen, B. Liang, and M. Dong, “A semidefinite relaxation approach to mobile cloud offloading with computing access point,” in Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Stockholm, Sweden, 2015, pp. 186–190.
[87] M.-H. Chen, M. Dong, and B. Liang, “Joint offloading decision and resource allocation for mobile cloud with computing access point,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), Pudong, China, 2016, pp. 3516–3520.
[88] S. Cao, X. Tao, Y. Hou, and Q. Cui, “An energy-optimal offloading algorithm of mobile computing based on HetNets,” in Proc. Int. Conf. Connected Veh. Expo (ICCVE), Shenzhen, China, 2015, pp. 254–258.
[89] J. Kennedy and R. C. Eberhart, “A discrete binary version of the particle swarm algorithm,” in Proc. IEEE Int. Conf. Syst. Man Cybern., Orlando, FL, USA, 1997, pp. 4104–4108.
[90] Y. Zhao, S. Zhou, T. Zhao, and Z. Niu, “Energy-efficient task offloading for multiuser mobile cloud computing,” in Proc. IEEE/CIC Int. Conf. Commun. China (ICCC), Shenzhen, China, 2015, pp. 1–5.
[91] C. You and K. Huang, “Multiuser resource allocation for mobile-edge computation offloading,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), Washington, DC, USA, 2016, pp. 1–6.
[92] C. You, K. Huang, H. Chae, and B.-H. Kim, “Energy-efficient resource allocation for mobile-edge computation offloading,” IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1397–1411, Mar. 2017.
[93] Y. Wang, M. Sheng, X. Wang, L. Wang, and J. Li, “Mobile-edge computing: Partial computation offloading using dynamic voltage scaling,” IEEE Trans. Commun., vol. 64, no. 10, pp. 4268–4282, Oct. 2016.
[94] O. Muñoz, A. Pascual-Iserte, and J. Vidal, “Joint allocation of radio and computational resources in wireless application offloading,” in Proc. Future Netw. Mobile Summit, Lisbon, Portugal, 2013, pp. 1–10.
[95] O. Muñoz, A. Pascual-Iserte, and J. Vidal, “Optimization of radio and computational resources for energy efficiency in latency-constrained application offloading,” IEEE Trans. Veh. Technol., vol. 64, no. 10, pp. 4738–4755, Oct. 2015.
[96] O. Muñoz, A. Pascual-Iserte, J. Vidal, and M. Molina, “Energy-latency trade-off for multiuser wireless computation offloading,” in Proc. IEEE Wireless Commun. Netw. Conf. Workshops (WCNCW), Istanbul, Turkey, 2014, pp. 29–33.
[97] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, “Power-delay tradeoff in multi-user mobile-edge computing systems,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), Washington, DC, USA, Dec. 2016, pp. 1–6.
[98] T. Zhao, S. Zhou, X. Guo, Y. Zhao, and Z. Niu, “A cooperative scheduling scheme of local cloud and Internet cloud for delay-aware mobile cloud computing,” in Proc. IEEE Globecom Workshops (GC Wkshps), San Diego, CA, USA, 2015, pp. 1–6.
[99] X. Guo, R. Singh, T. Zhao, and Z. Niu, “An index based task assignment policy for achieving optimal power-delay tradeoff in edge cloud systems,” in Proc. IEEE Int. Conf. Commun. (ICC), Kuala Lumpur, Malaysia, 2016, pp. 1–7.
[100] V. Di Valerio and F. L. Presti, “Optimal virtual machines allocation in mobile femto-cloud computing: An MDP approach,” in Proc. IEEE Wireless Commun. Netw. Conf. Workshops (WCNCW), Istanbul, Turkey, 2014, pp. 7–11.
[101] S. M. S. Tanzil, O. N. Gharehshiran, and V. Krishnamurthy, “Femto-cloud formation: A coalitional game-theoretic approach,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), San Diego, CA, USA, 2015, pp. 1–6.
[102] J. Oueis, E. Calvanese-Strinati, A. De Domenico, and S. Barbarossa, “On the impact of backhaul network on distributed cloud computing,” in Proc. IEEE Wireless Commun. Netw. Conf. Workshops (WCNCW), Istanbul, Turkey, 2014, pp. 12–17.
[103] J. Oueis, E. C. Strinati, and S. Barbarossa, “Small cell clustering for efficient distributed cloud computing,” in Proc. IEEE Annu. Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC), Washington, DC, USA, 2014, pp. 1474–1479.
[104] J. Oueis, E. C. Strinati, S. Sardellitti, and S. Barbarossa, “Small cell clustering for efficient distributed fog computing: A multi-user case,” in Proc. IEEE Veh. Technol. Conf. (VTC Fall), Boston, MA, USA, 2015, pp. 1–5.
[105] J. Oueis, E. C. Strinati, and S. Barbarossa, “The fog balancing: Load distribution for small cell cloud computing,” in Proc. IEEE 81st Veh. Technol. Conf. (VTC Spring), Glasgow, U.K., 2015, pp. 1–6.
[106] M. Vondra and Z. Becvar, “QoS-ensuring distribution of computation load among cloud-enabled small cells,” in Proc. IEEE Int. Conf. Cloud Netw. (CloudNet), 2014, pp. 197–203.
[107] S. Wang, M. Zafer, and K. K. Leung, “Online placement of multi-component applications in edge computing environments,” IEEE Access, vol. 5, pp. 2514–2533, 2017.
[108] P. Mach and Z. Becvar, “Cloud-aware power control for cloud-enabled small cells,” in Proc. IEEE Globecom Workshops (GC Wkshps), Austin, TX, USA, 2014, pp. 1038–1043.
[109] P. Mach and Z. Becvar, “Cloud-aware power control for real-time application offloading in mobile edge computing,” Trans. Emerg. Telecommun. Technol., vol. 27, no. 5, pp. 648–661, 2016.
[110] T. Taleb and A. Ksentini, “An analytical model for follow me cloud,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM), Atlanta, GA, USA, 2013, pp. 1291–1296.
[111] A. Ksentini, T. Taleb, and M. Chen, “A Markov decision process-based service migration procedure for follow me cloud,” in Proc. IEEE Int. Conf. Commun. (ICC), Sydney, NSW, Australia, 2014, pp. 1350–1354.
[112] X. Sun and N. Ansari, “PRIMAL: PRofIt Maximization Avatar pLacement for mobile edge computing,” in Proc. IEEE Int. Conf. Commun. (ICC), Kuala Lumpur, Malaysia, 2016, pp. 1–6.
[113] S. Wang et al., “Mobility-induced service migration in mobile micro-clouds,” in Proc. IEEE Mil. Commun. Conf., Baltimore, MD, USA, 2014, pp. 835–840.
[114] S. Wang et al., “Dynamic service migration in mobile edge-clouds,” in Proc. Netw. Conf. (IFIP Networking), Toulouse, France, 2015, pp. 1–9.
[115] A. Nadembega, A. S. Hafid, and R. Brisebois, “Mobility prediction model-based service migration procedure for follow me cloud to support QoS and QoE,” in Proc. IEEE Int. Conf. Commun. (ICC), Kuala Lumpur, Malaysia, 2016, pp. 1–6.
[116] S. Wang et al., “Dynamic service placement for mobile micro-clouds with predicted future costs,” in Proc. IEEE Int. Conf. Commun. (ICC), London, U.K., 2015, pp. 5504–5510.
[117] S. Wang et al., “Dynamic service placement for mobile micro-clouds with predicted future costs,” IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 4, pp. 1002–1016, Apr. 2017.
[118] R. Urgaonkar et al., “Dynamic service migration and workload scheduling in edge-clouds,” Perform. Eval., vol. 91, pp. 205–228, Sep. 2015.
[119] K. Ha et al., “Adaptive VM handoff across cloudlets,” Dept. Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA, Tech. Rep. CMU-CS-15-113, Jun. 2015.
[120] S. Secci, P. Raad, and P. Gallard, “Linking virtual machine mobility to user mobility,” IEEE Trans. Netw. Service Manag., vol. 13, no. 4, pp. 927–940, Dec. 2016.
[121] D. Farinacci, V. Fuller, D. Meyer, and D. Lewis, “The locator/ID separation protocol (LISP),” Internet Eng. Task Force, Fremont, CA, USA, RFC 6830, 2013.
[122] Z. Becvar, J. Plachy, and P. Mach, “Path selection using handover in mobile networks with cloud-enabled small cells,” in Proc. IEEE Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC), Washington, DC, USA, 2014, pp. 1480–1485.
[123] J. Plachy, Z. Becvar, and P. Mach, “Path selection enabling user mobility and efficient distribution of data for computation at the edge of mobile network,” Comput. Netw., vol. 108, pp. 357–370, Oct. 2016.
[124] J. Plachy, Z. Becvar, and E. C. Strinati, “Dynamic resource allocation exploiting mobility prediction in mobile edge computing,” in Proc. IEEE Int. Symp. Pers. Indoor Mobile Radio Commun. (PIMRC), Valencia, Spain, 2016, pp. 1–6.
Pavel Mach (M’10) received the M.Sc. and Ph.D. degrees from the Czech Technical University in Prague, Czech Republic, in 2006 and 2010, respectively, where he is currently a Senior Researcher with the Department of Telecommunication Engineering. He has been actively involved in several national and international projects funded by the European Commission. In 2015, he joined the 5G Mobile Research Laboratory at the Czech Technical University in Prague, focusing on key aspects and challenges related to future mobile networks and emerging wireless technologies. He has co-authored over 50 papers in international journals and conferences. His research interests include radio resource management in emerging wireless technologies, device-to-device communication, cognitive radio, and mobile edge computing.

Zdenek Becvar (M’10) received the M.Sc. and Ph.D. degrees in telecommunication engineering from the Czech Technical University in Prague, Czech Republic, in 2005 and 2010, respectively, where he is currently an Associate Professor with the Department of Telecommunication Engineering. From 2006 to 2007, he was with the Sitronics Research and Development Center, Prague, focusing on speech quality in VoIP, and in 2009 he was involved in research activities of the Vodafone Research and Development Center at the Czech Technical University in Prague. He was on internships with Budapest Politechnic, Hungary, in 2007, CEA-Leti, France, in 2013, and EURECOM, France, in 2016. In 2013, he became a representative of the Czech Technical University in the ETSI and 3GPP standardization organizations. In 2015, he founded the 5G Mobile Research Laboratory at the Czech Technical University in Prague, focusing on research toward 5G mobile networks and beyond. He is a member of over 15 program committees of international conferences or workshops and has published three book chapters and over 60 conference or journal papers. He works on the development of solutions for future mobile networks (5G and beyond) with a special focus on the optimization of radio resource management, mobility support, device-to-device communication, self-optimization, power control, radio access network architecture, and small cells.