A Layered Strategy for Reducing Offloading Latency in Fog Computing
Ahmed Chebaane1, Muhammad Kamran Arshad2, Florian Burger1, and Abdelmajid Khelil1
1Institute for Data and Process Science, CS Department, Landshut University of Applied Sciences, Landshut, Germany
2CS Department, University of Salerno, Fisciano, Italy
{ahmed.chebaane, khelil}@haw-landshut.de, [email protected], [email protected]
Abstract—Moving applications from local Internet of Things devices to neighboring infrastructures has been the highlight of the current distributed computing era. Due to high latency and network bottlenecks, Cloud Computing has become rather problematic for real-time applications. Accordingly, Fog Computing has been presented as the missing piece that, together with Cloud Computing, can serve virtually all applications. Application virtualization through containers has become more and more popular in the context of Fog Computing, as it simplifies live migration and, in particular, the offloading of time-critical applications or their selected tasks. However, the overall offloading latency in Fog Computing has not been sufficiently investigated through measurement studies. This article proposes a measurement analysis of the offloading latency of time-critical application tasks from the end device, such as a vehicle, to surrounding Fog nodes based on Docker containers, Kubernetes, and checkpointing. Based on our layered offloading framework, several offloading scenarios have been studied. The experimental results show that by preparing the Fog nodes in advance through careful, easy arrangements, considerable time can be saved when offloading the application to a Fog node within the Fog cluster. We further consider Vehicular Fog Computing (VFC) as a popular example of time-critical Fog Computing and assess the suitability of various offloading strategies for VFC use cases.

Keywords—Fog Computing; Vehicular Fog Computing; Task Offloading; Checkpoints; Application Virtualization; Latency-Critical Applications.
I. INTRODUCTION
Over the past few years, Fog Computing has undoubtedly become a promising approach for real-time applications/services. This paradigm was introduced in 2012 by both industry and academia [1] to bring Cloud Computing to the edge of the network. Instead of centrally running analysis, processing, and storage functions in the Cloud, they are now decentralized and run on Gateways/Fog nodes very close to the end-user devices, thus decreasing latency, increasing predictability, and preserving data locality and privacy. Accordingly, this computing architecture is more suitable for time-sensitive applications such as Vehicle-to-Everything (V2X) [2].
Time-sensitive vehicular applications (e.g., Vulnerable Road Users (VRUs), obstacle detection, autonomous driving, collision warning, etc. [3]) could be executed completely on the onboard computers of a few vehicles. However, due to limited onboard resources, most vehicles will not be able to process the data onboard while meeting the deadline. For this purpose, the application (and its data) should be offloaded with minimal delay and the highest reliability to the surrounding Fog infrastructure.
A Cloud-based solution is not suitable, as the latency of the data transfer is insufficiently deterministic and may become intolerable. Since Fog nodes are very close to the point of use, the transmission time is minimized. Fog Computing has a clear advantage for such application scenarios [4]. Accordingly, there is a need to design a practical approach that enables offloading a time-critical application, or a selected set of its tasks, with minimal latency from the user devices to the neighboring Fog nodes.

The underlying communication infrastructure plays a key role in enabling timely task offloading, i.e., by providing a suitable communication link to the selected destination Fog node. The fifth generation (5G) [5] and the sixth generation (6G) [6] of cellular communication, as well as LoRa communication [7], have gained exceptional attention from industrial and academic researchers, with network bandwidths ranging between Kbps and Gbps. This study considers network bandwidth variations between Mbps and Gbps.

Live service migration approaches have recently been proposed with the help of the Checkpoint/Restore In Userspace (CRIU) software tool [8], where the current status of a running application along with the Operating System (OS) state can easily be recorded and transferred from one Fog node to a neighboring Fog node. Therefore, Fog-to-Fog migration approaches can be crucial for optimizing the offloading latency from the user device (e.g., vehicle) to the Fog node.
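To make the checkpoint step concrete, the following is a minimal sketch of how such a record/restore round trip can be driven through Docker's experimental checkpoint support, which relies on CRIU underneath. The container and checkpoint names are illustrative, and shipping the checkpoint data between nodes is left out.

```python
import subprocess

def checkpoint_container(container: str, checkpoint: str = "cp0") -> None:
    """Record the in-memory state of a running container (CRIU under the hood).

    Assumes Docker's experimental mode is enabled and CRIU is installed;
    the default checkpoint name is an example.
    """
    subprocess.run(["docker", "checkpoint", "create", container, checkpoint],
                   check=True)

def restore_container(container: str, checkpoint: str = "cp0") -> None:
    """Resume the container from a previously recorded state, e.g., on the
    Fog node that received the checkpoint data."""
    subprocess.run(["docker", "start", "--checkpoint", checkpoint, container],
                   check=True)
```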
We follow an application model that is commonly used in Fog Computing as well as in other distributed embedded systems communities. An application is represented as a directed acyclic flow graph of tasks [9]. A task could be a Function as a Service (FaaS) [10] or a microservice [11]. The root task is executed on the application initiator (e.g., the vehicle that starts the application). The remaining tasks can be executed either locally or on surrounding Fog nodes. A task is usually represented by an application container along with its dependencies and the task execution deadline.
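As a sketch, this application model can be captured by a small data structure; the class, field, and task names below are ours for illustration, not part of the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node of the application's directed acyclic flow graph."""
    name: str
    deadline_ms: float            # task execution deadline
    container_image: str          # container carrying the task and its dependencies
    successors: list["Task"] = field(default_factory=list)

# The root task runs on the initiator (e.g., the vehicle); successors may be
# executed locally or offloaded to surrounding Fog nodes.
detect = Task("obstacle-detection", deadline_ms=50.0, container_image="detector:alpine")
root = Task("sensor-ingest", deadline_ms=10.0, container_image="ingest:alpine",
            successors=[detect])
```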
Kubernetes [12], as an open-source orchestration framework, aims to manage application containers in different environments, such as physical machines, virtual machines, or Cloud/Fog environments. In this work, we use a Kubernetes distribution named MicroK8s. This lightweight distribution is particularly well suited for IoT and edge devices [13]. A pod is a wrapper around a container that is identifiable by its IP address. One or more containers can run in a pod [14].
Recently, several research studies have focused on minimizing energy consumption, maximizing QoS, and minimizing the transmission delay of time-sensitive vehicular applications among Fog-to-Fog or Device-to-Fog settings. However, there is a notable lack of extensive systematic studies of the offloading latency in VFC. Therefore, we propose a comprehensive measurement analysis of the overall offloading latency in time-sensitive Fog Computing. This experimental study extends our previous work [15] through the following new contributions. We first measure our layered offloading framework [15] to get an accurate overview of which steps in the offloading process take how long. In addition, we establish a Kubernetes clustering approach to analyze the minimum time needed for prefetching time-sensitive application tasks from the initiator vehicle to the surrounding Fog nodes. Based on our evaluation, we assess the suitability of various offloading cases for modern real-time vehicular applications depending on their timeliness requirements.
Our experimental results confirm that using the layered framework and preparing the Fog nodes through the Kubernetes cluster, while respecting the minimal prefetching time, achieves a sufficient task offloading time in VFC. Hence, our proposed solution could improve road safety and traffic efficiency in the automotive field. Moreover, our detailed experimental results could be used for similar assessments in the fields of Industry 4.0, smart grids, etc.
The remainder of this paper is organized as follows. Section II discusses related work and highlights the research gap. In Section III, we describe our experimental studies for measuring the offloading time. Section IV provides the experimental results. Section V concludes the paper and presents relevant future work.
II. STATE OF THE ART
Several measurement studies [16], [17], [18], [19] cope with the performance of Fog Computing in terms of the overall execution time, QoS, energy consumption, and network utilization. Yoon et al. [20] propose a measurement analysis of intra-host and inter-host communication within various Container Network Interface (CNI) network components to find the best-performing model for an MEC network with a container-on-VM (CoV) architecture. Xiao et al. [21] propose a long-term, city-wide measurement of the wireless access latency between moving vehicles and Fog Computing systems. Based on the measurements, they provide a novel networking and server adaptation framework named AdaptiveFog that allows vehicles to dynamically switch between Mobile Network Operator (MNO) networks and Fog/Cloud servers in order to optimize the latency performance of LTE-based Fog Computing systems. In [18], the authors present a measurement study on the computing performance of migration tasks between an autonomous Unmanned Aerial Vehicle (UAV) and an available edge server. Based on two key communication technologies (WiFi and LTE), the analysis focuses on the spatio-temporal characteristics of the capture-to-control time in a real-world platform. [22] reports a five-month measurement analysis of the wireless access latency between vehicles and a Fog Computing system supported by a multi-operator LTE network. Based on the study, the authors propose a novel optimization framework, likewise named AdaptiveFog, to help smart vehicles dynamically switch between MNOs and Fog or Cloud servers.
Most of these offloading techniques focus on determining the optimal placement of task offloading on Fog nodes, following the main objective of minimizing power consumption on the user device without sacrificing the required quality of service. However, these projects insufficiently address time-critical applications. In addition, almost all of the measurement studies in vehicular networks focus on optimizing the network latency and energy consumption. Nevertheless, the experimental analysis of the overall offloading latency for time-critical vehicular applications needs more investigation. In this paper, we focus on optimizing the overall offloading time between the user device and Fog nodes based on intensive experimental measurements. We achieve this through a greedy approach concerning the data to be transferred, which indirectly saves network bandwidth and energy overhead on user devices.
III. METHODOLOGY AND EXPERIMENTAL STUDIES
Our ultimate goal is to develop an efficient approach for optimizing the time-critical offloading of Docker containers in (V)FC. To achieve that, we experimentally investigate a layered framework technique to show how far layer-based offloading can be applied to Docker containers. The core idea of this approach is to fragment the entire application and its services into their individual components and organize them into several layers. Several cases are possible depending on what needs to be offloaded and what resources are already available on the offloading target Fog node. In this work, we consider the various possible cases and profile the offloading time needed. This allows us to identify which cases are suitable for which VFC applications.
A. Methodology

Our contribution aims to measure the required time for time-critical offloading in (V)FC through the three different scenarios illustrated in Figure 1.

In the first scenario (Subsection III-B), Vehicle V1 authenticates on demand to its direct Fog nodes and offloads its application fragments to the Fog node without reserving resources in advance. Obviously, it is not guaranteed that V1 gets sufficient resources allocated to its application. However, in this experimental scenario, we assume V1 gets the required resources allocated.

In Subsection III-C, we consider that Vehicle V1 proactively authenticates to the Fog node before a real offloading demand occurs. This allows us to measure the authentication time overhead.

The third scenario in Subsection III-D assumes that Vehicle V2 proactively reserves the desired resources for its VFC application from the Kubernetes cluster to prepare the resources at the appropriate Fog node in advance. Then, V2 offloads its application fragments once it enters the coverage of the reserved Fog node (Fog 5). This scenario allows us to measure the time required by Kubernetes to prepare the required resources in advance. This time is needed to fix the minimal lead time for issuing a reservation.
B. Study 1: Offloading with Explicit Authentication and without Reservation

Regularly, Fog nodes broadcast/advertise information about their available components from the layered model to vehicles in their coverage area. Fog nodes keep listening to vehicle requests. When a vehicle requires offloading, it checks the container state of its application container. Upon checkpointing, it authenticates to the target Fog node. Next, only the missing parts of the container are offloaded using the incremental file synchronization tool rsync. The receiver Fog node executes the application fragments and immediately sends the result back to the vehicle.

As illustrated in Figure 2, comparing the layered framework based on the three-layer model with the Docker components shows how the individual layers of a Docker image or container can be matched to the layered framework. Accordingly, we define four offloading cases based on the Docker concept (a selection sketch follows the list):

• Case 1: Offloading of task data only
• Case 2: Offloading of task container
• Case 3: Offloading of task container and Docker Image
• Case 4: Offloading of task container, Docker Image, and Base Image
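The decision of which case applies can be read as a simple cascade over the layers already cached on the target Fog node. The sketch below assumes the three booleans are answered by querying the target node's Docker daemon; the function name is ours.

```python
def select_offloading_case(has_container: bool, has_docker_image: bool,
                           has_base_image: bool) -> int:
    """Map the layers already present on the target Fog node to Cases 1-4."""
    if has_container:        # everything is cached: only task data must move
        return 1
    if has_docker_image:     # create a container from the cached image first
        return 2
    if has_base_image:       # add the application image layer, then proceed as Case 2
        return 3
    return 4                 # nothing is cached: the base image must be shipped too
```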
Our layered framework is based on these four offloading cases to check the difference between the various layers of the initiator and the target Fog node. Once the system points to the missing layer of the target Fog node, the service offloading starts the process from the corresponding case. Each case covers a specific operation.

In Case 1, the container layer, including the FS data, needs to be transferred from the source node to the target node using rsync. Case 2 arises when a container does not exist on the destination Fog node, but the Docker image is available. Therefore, a container based on the existing image must be created on the target Fog node. The remaining necessary steps are the same as in Case 1. Accordingly, the in-memory status, including the FS data, is offloaded from the source to the destination. Case 3 adds the appropriate image layer to the Base image, then creates a container, and finally synchronizes the in-memory status of the target Fog node from the initiator, which corresponds to Case 2. Case 4 applies when there is neither a Base image, nor a suitable Docker image, nor a container. Thus, the Base image must be provided first, followed by the steps of Case 3. It is noteworthy that Case 4 is not suitable for time-critical applications due to the large size of the base image, which increases the overall offloading time.
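The transfer step shared by all cases can be driven by rsync, so that only the blocks that differ are sent. A minimal sketch, assuming SSH access to the Fog node; the host name and paths are placeholders.

```python
import subprocess

def sync_container_layer(src_dir: str, fog_host: str, dst_dir: str) -> None:
    """Incrementally synchronize the checkpointed container layer to a Fog node."""
    subprocess.run(
        ["rsync", "-az", "--delete",        # archive mode, compression, mirror deletions
         f"{src_dir}/", f"{fog_host}:{dst_dir}/"],
        check=True,
    )
```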
[Fig. 1: Offloading scenarios within the microk8s cluster]
We repeated the exact measurements using the Alpine Base image, which is a very compact base image (7.8 MB) and usually represents a meaningful alternative to larger base images, with the simulated link (Figure 5 (a)) and the Ethernet network (Figure 5 (b)). As confirmed by the measurements, the overall offloading time is generally reduced in all existing cases compared to the Debian Base image (55.4 MB). In the "worst case", the offloading latency for Case 4 can be reduced by more than 12 seconds for Fog 1 and more than 6 seconds for the simulated link. The offloading latency for Case 3 is also slightly reduced. Notably, the difference between the offloading times for Case 4 and Case 3 is relatively smaller compared to the Debian Base image. The offloading time for Cases 1 and 2 can also be reduced by a few milliseconds (e.g., in Case 1 for the memcached image, around 27 milliseconds for Fog node 1 and 5 milliseconds for the simulated link).

TABLE I: Benchmarking of Vehicular Applications [4]

App Class | Timeliness Requirement | Bandwidth Requirement | Acceptable Offloading
Cooperative Driving (lane change warning, lane merge) | Few 100-1000 milliseconds | > 100 Mbps | Case 1 or Case 2 + optimistic authentication
Cooperative Safety (neighbor collision warning, obstacle detection, vulnerable pedestrian protection) | Few 10 milliseconds | > 1000 Mbps | Case 1 + resources reservation
Cooperative Perception (see-through, life-seat, satellite view) | Few seconds | > 10 Mbps | Case 1 or Case 2 or Case 3 or Case 4
Autonomous Driving (self-driving in the city) | Few 10 milliseconds | > 1000 Mbps | Case 1 + resources reservation
Autonomous Navigation (high-definition local map acquisition) | Few seconds | > 10 Mbps | Case 1 or Case 2 or Case 3 or Case 4
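Table I can also be read as a simple decision rule over the timeliness requirement; the thresholds below paraphrase the table, and the function name is ours.

```python
def acceptable_offloading(deadline_s: float) -> str:
    """Mirror Table I: map an application's timeliness requirement to the
    acceptable offloading cases (bandwidth requirements in parentheses)."""
    if deadline_s < 0.1:     # a few tens of milliseconds
        return "Case 1 + resources reservation (> 1000 Mbps)"
    if deadline_s < 1.0:     # a few hundreds of milliseconds
        return "Case 1 or Case 2 + optimistic authentication (> 100 Mbps)"
    return "Case 1, 2, 3, or 4 (> 10 Mbps)"
```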
C. Latency of Offloading without Authentication and without Reservation (Study 2)

Figure 5 (c)-(d) shows the significant time overhead that the authentication process from the user device to the Fog node adds to the offloading time for Cases 1-4. For example, in Case 1, the offloading time decreases by about 478 milliseconds for the simulated link and by more than one second for the Ethernet network/Fog 1 when authentication is removed from the critical path. This difference is due to the performance of the Fog nodes and the network latency between VM1 and Fog 1. This convincingly demonstrates an urgent need to develop a proactive authentication technique to further optimize time-sensitive offloading in VFC.
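The authentication overhead reported above can be isolated with a timing harness of the following shape; authenticate() and offload() are placeholders for the respective protocol steps, not functions of our framework.

```python
import time

def measure_offloading(authenticate, offload, proactive: bool) -> float:
    """Return the offloading latency in seconds, with the authentication step
    either on the critical path (Study 1) or completed beforehand (Study 2)."""
    if proactive:
        authenticate()        # completed before the offloading demand occurs
    start = time.monotonic()
    if not proactive:
        authenticate()        # counted against the offloading deadline
    offload()
    return time.monotonic() - start
```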
D. Latency of Offloading Assuming a Timely Resource Reservation without an Overloaded Situation (Study 3)
Figure ?? shows the impact of the BL on the overall prefetching time. A very noticeable point is the time difference between the Debian and Alpine base images. In Step 2, the Alpine image needs about 3 seconds to be transferred from the Fog node's local file system to the Pod container, whereas the Debian image takes more than 6 seconds. Step 3 requires about 5 seconds longer to transfer the Debian image than Step 4. In addition, the distribution of the Alpine base image to all Fog workers within the cluster needs about 7.5 seconds, and about 15 seconds for the Debian Base image. We now evaluate the prefetching time of the DIL for both base images, Alpine in Figure 7 and Debian in Figure 6. Steps 2 and 3 are the most time-consuming phases in both figures, which can also be explained by the performance differences between the Fog nodes. Our measurements in Figure 7 show that the prefetching of the Busybox DIL across all Fog nodes in the cluster takes more than 6 seconds. The Memcached and Redis Alpine DIL consume slightly more time, and the synchronization of the Python image to all Fog nodes takes the longest. In addition, the measurements in Figure 6 illustrate that using the Alpine base image can significantly decrease the start time for prefetching the DIL compared to a Debian base image.

Accordingly, the minimum required start time for prefetching depends mainly on the size of the BL and DIL. For example, the Fog cluster requires 11.5 seconds to start prefetching the Redis image with the Alpine base image. For the Debian base image, the Fog cluster should start prefetching the Redis DIL and the Debian BL 22.6 seconds before the resources are needed.
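The minimum reservation lead time then follows from simple addition of the two phases, assuming BL distribution and DIL synchronization run back to back. In the example below, the DIL values are inferred by subtracting the reported BL distribution times from the reported totals for Redis.

```python
def min_prefetch_lead_time(bl_seconds: float, dil_seconds: float) -> float:
    """Minimum lead time before resources are needed: base-layer (BL)
    distribution plus Docker-image-layer (DIL) synchronization."""
    return bl_seconds + dil_seconds

# Redis example: Alpine BL ~7.5 s, Debian BL ~15 s (reported); DIL inferred.
assert min_prefetch_lead_time(bl_seconds=7.5, dil_seconds=4.0) == 11.5            # Alpine
assert abs(min_prefetch_lead_time(bl_seconds=15.0, dil_seconds=7.6) - 22.6) < 1e-6  # Debian
```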
[Fig. 5: Offloading time [seconds] per case (Cases 1-4), broken down into User device, Authentication, Network, and Fog shares: (a) Simulated Link, (b) Ethernet, Fog 1, (c) Simulated Link, (d) Ethernet, Fog 1]
Our experimental study confirms the suitability of the layered framework and prefetching approach in VFC. In particular, reserving resources in advance and proactively arranging the DIL and BL at least the minimum prefetching time before use may reduce offloading to Case 1 or 2. The selection of the relevant Base image also plays an important role. Accordingly, choosing a smaller base image positively impacts the offloading time and the chances of meeting the application deadline. Moreover, optimizing the authentication among user devices and Fog nodes further improves the overall offloading time. Next, we benchmark the performance of the three measured offloading scenarios against the requirements of popular time-sensitive vehicular applications.
[Fig. 6: Offloading Time for Prefetching DIL (Debian Base Image)]

[Fig. 7: Offloading Time for Prefetching DIL (Alpine Base Image)]

E. Benchmarking Vehicular Applications Concerning Timeliness Requirements

In the following, we rely on the work [4], where VFC applications have been surveyed and their requirements concerning end-to-end latency have been discussed in detail.

As illustrated in Table I, our proposed layered framework approach can support some vehicular applications, such as high-definition local map acquisition, see-through, etc., that tolerate only a few seconds of offloading latency. Other applications, such as lane change warnings, must proactively authenticate to the Fog nodes to offload Docker containers with at least 100 Mbps bandwidth to meet the deadline. However, some other applications, like autonomous driving and obstacle detection, require only code offloading with 1000 Mbps bandwidth to cope with their strict timeliness needs. 5G, with its up to 1000 Mbps bandwidth and very short network latency [3], seems to be very convenient for several time-sensitive VFC applications.
Overall, the experimental study convincingly demonstrates that there is an urgent need for techniques that reserve resources and prepare the required layers in advance (Case 2, Case 3, and Case 4) to eventually reach a microservice/FaaS-like computational architecture for time-critical Fog applications. In addition, ensuring proactive/optimistic authentication between user devices and Fog nodes is an urgent need for time-sensitive Fog applications.
V. CONCLUSION
Time-critical offloading is a promising Fog Computing use case. In this paper, we proposed a set of experimental studies to better investigate the offloading latency. Based on four offloading cases according to the layered Docker container architecture, our experiments confirmed the effectiveness of a layered strategy in terms of overall service offloading time. In particular, for Cases 1 and 2, a container can be offloaded from the application initiator to a Fog node within a period of 0.121 to 1.5 seconds, provided an optimistic authentication is in place. These achievements confirm the suitability of our fine-grained layered approach with minimal message rounds to address the strict requirements of time-critical applications. In addition, we
showed through our analysis that the minimum prefetching time is at least 6 seconds, and the Fog nodes need to arrange the DIL and BL inside the Kubernetes cluster in advance. Therefore, we indirectly increased the availability of Fog nodes as well as optimized the overall offloading time.
In the future, we plan to conduct similar measurements on different communication links. In particular, 5G and beyond links will be considered. Furthermore, we aim to develop resource reservation techniques for time-sensitive Fog Computing.
REFERENCES
[1] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli. Fog Computing and its Role in the Internet of Things. In Proceedings of the First MCC Workshop on Mobile Cloud Computing (MCC), 2012.
[2] A. Khelil and D. Soldani. On the Suitability of Device-to-Device Communications for Road Traffic Safety. In Proceedings of the IEEE World Forum on Internet of Things (WF-IoT), 2014.
[3] A. E. Fernandez et al. 5GCAR Scenarios, Use Cases, Requirements and KPIs. https://5gcar.eu/wp-content/uploads/2017/05/5GCAR_D2.1_v1.0.pdf, 2017.
[4] A. Chebaane, A. Khelil, and N. Suri. Time-Critical Fog Computing for Vehicular Networks. In Fog Computing: Theory and Practice, John Wiley & Sons, ISBN: 978-1-119-55169-0, 2020.
[5] X. Ge, Z. Li, and S. Li. 5G Software Defined Vehicular Networks. IEEE Communications Magazine, vol. 55, no. 7, 2017.
[6] M. Giordani, M. Polese, M. Mezzavilla, et al. Toward 6G Networks: Use Cases and Technologies. IEEE Communications Magazine, vol. 58, no. 3, 2020.
[7] H. Rudes, I. Nizetic Kosovic, T. Perkovic, and M. Cagalj. Towards Reliable IoT: Testing LoRa Communication. In 2018 26th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), 2018.
[8] CRIU. Checkpoint/Restore Functionality. Main Page — CRIU. [Online]. Available: https://criu.org/Main_Page, November 2024.
[9] C. Roig, A. Ripoll, and F. Guirado. A New Task Graph Model for Mapping Message Passing Applications. IEEE Transactions on Parallel and Distributed Systems, vol. 18, no. 12, 2007.
[10] P. Karhula, J. Janak, and H. Schulzrinne. Checkpointing and Migration of IoT Edge Functions. In Proceedings of the 2nd ACM International Workshop on Edge Systems, Analytics and Networking (EdgeSys), 2019.
[11] S. Wang, Y. Guo, N. Zhang, et al. Delay-aware Microservice Coordination in Mobile Edge Computing: A Reinforcement Learning Approach. IEEE Transactions on Mobile Computing, 2019.
[12] Kubernetes. Main Page — Kubernetes. [Online]. Available: https://kubernetes.io, November 2024.
[13] MicroK8s. Main Page — MicroK8s. [Online]. Available: https://microk8s.io/, October 2021.
[14] M. Luksa. Kubernetes in Action. Manning Publications, ISBN: 9781617293726, 2018.
[15] A. Chebaane, S. Spornraft, and A. Khelil. Container-based Task Offloading for Time-Critical Fog Computing. In IEEE 3rd 5G World Forum (5GWF), 2020.
[16] D. Silva, G. Asaamoning, H. Orrillo, et al. An Analysis of Fog Computing Data Placement Algorithms. In Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, 2019.
[17] N. Verba, K. M. Chao, A. James, et al. Graph Analysis of Fog Computing Systems for Industry 4.0. In Proceedings of the 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017.
[18] D. Callegaro, S. Baidya, and M. Levorato. A Measurement Study on Edge Computing for Autonomous UAVs. In Proceedings of the ACM SIGCOMM 2019 Workshop on Mobile AirGround Edge Computing, Systems, Networks, and Applications (MAGESys), 2019.
[19] A. R. Hameed, S. ul Islam, and I. Ahmad. Energy- and Performance-aware Load-balancing in Vehicular Fog Computing. Sustainable Computing: Informatics and Systems, vol. 30, 2021.
[20] J. Yoon, J. Li, and S. Shin. A Measurement Study on Evaluating Container Network Performance for Edge Computing. In Proceedings of the 2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS), 2020.
[21] Y. Xiao, M. Krunz, H. Volos, and T. Bando. Driving in the Fog: Latency Measurement, Modeling, and Optimization of LTE-based Fog Computing for Smart Vehicles. In 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), 2019.
[22] Y. Xiao and M. Krunz. AdaptiveFog: A Modelling and Optimization Framework for Fog Computing in Intelligent Transportation Systems. IEEE Transactions on Mobile Computing, 2021.