Improving the performance of the fuzzy-internet of things joint system by using an efficient service deployment algorithm
Corresponding Author:
Ali Ahmed Razzaq
Department of Computer Science and Systems Engineering, College of Engineering, Andhra University
Visakhapatnam 530003, India
Email: [email protected]
1. INTRODUCTION
During the coming years, billions of things (such as home appliances, irrigation outlets, and lighting
poles) will be connected to the internet, and this will lead to a real revolution that affects the amount of data
that is collected and shared. Of course, what is presented is known as the internet of things (IoT). The term
cloud computing refers to a service provided by data centers, and it plays an important role in processing
application services generated by the IoT. The load on cloud servers increases with the increase in the number
of IoT devices (network congestion occurs), and given that these servers are usually located spatially far from
the IoT network, this will lead to a noticeable increase in transmission time and a decrease in the performance
of time-sensitive applications. Other related work can be found in [1]–[6]. Fog computing is a new technology for decentralized (distributed) computing that aims to improve efficiency by reducing the amount of data that must be transferred to the cloud for processing, analysis, and storage. The authors of [7], [8] observe that in fog computing, application services are handled at the edge of the network: fog acts as an intermediary between the cloud and the end user, and its spatial proximity to the IoT network reduces network congestion.
Two approaches are followed in the context of implementing IoT application services. In the first approach, an IoT device processes its own tasks locally or chooses to offload them computationally to the nearest fog node or to the cloud. The aim is to localize processing and maximize the total number of tasks served locally in the IoT devices and within the fog environment while meeting both deadline requirements and energy constraints. This is achieved according to different strategies: Tang et al. [9] consider strategies that rely on the battery power level of the IoT device and the availability of fog nodes (availability depends on the length of the task queue) to determine the processing location, while Zhu et al. [10] introduced an idea that relies on two criteria for making offloading decisions, namely the task execution time and the energy consumed during that process, in the cases of local and fog execution; the result of this comparison determines where the task/service is processed.
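As an illustration of the two-criteria comparison just described, the following sketch weighs estimated execution time against energy for local versus fog execution. The weighted-sum cost and the weight values are illustrative assumptions, not details taken from [10].

```python
from dataclasses import dataclass

@dataclass
class ExecutionEstimate:
    time: float    # estimated task execution time (s)
    energy: float  # estimated energy consumed (J)

def offload_decision(local: ExecutionEstimate, fog: ExecutionEstimate,
                     w_time: float = 0.5, w_energy: float = 0.5) -> str:
    """Compare a weighted cost of the two criteria (execution time and
    energy) for each location and return the cheaper one."""
    local_cost = w_time * local.time + w_energy * local.energy
    fog_cost = w_time * fog.time + w_energy * fog.energy
    return "local" if local_cost <= fog_cost else "fog"
```

For example, a task that is slow and power-hungry locally but cheap on a nearby fog node would be offloaded: `offload_decision(ExecutionEstimate(2.0, 5.0), ExecutionEstimate(0.8, 1.5))` returns `"fog"`.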
There is another approach that excludes IoT devices from processing operations: fog and cloud resources are exploited to accomplish end-user tasks, so that all computational tasks of an IoT device are offloaded to fog nodes, which share the incoming data for processing or may send part of this load to the cloud. Gupta et al. [11] propose a strategy for service placement in fog nodes close to IoT users. Considering that these nodes ultimately have limited resources and cannot handle all requests, the proposed strategy resorts to placing the remaining services on cloud devices in data centers. Other similar work can be found in [12], [13].
One of the most promising solutions aimed at improving the utilization of available resources in a fog environment was studied in [14], [15], where offloading of computational tasks (migration of virtual machines or containers), in other words sharing the load between fog nodes, reflects positively on the quality of service (QoS) provided to IoT applications by reducing delay. However, according to Gao et al. [16], the disadvantage of this improvement is an increase in the energy consumption of the fog system. Based on this gap, several policies have been proposed to deploy IoT application services in the fog-cloud system. One of them [17] depends on the type of service: the goal is either to reduce the response time if the service is critical, or to reduce the fog system's use of the energy resource if the service is normal (delay-tolerant). Alenizi and Rana [18] used dynamic task scheduling (DTS), with the aim of reducing the overall delay of critical (delay-sensitive) tasks, in addition to dynamic energy control (DEC) in order to reduce energy consumption in the fog layer while maximizing the use of resource-limited fog nodes. Additionally, Kayal [19] relied on placing services without any central coordination, with a design based on the Markov approximation method, seeking to reduce both fog energy consumption and the communication cost of applications. Energy saving is an important area of research not only for fog computing but also in cloud computing, where searching for idle data centers and putting them into energy-saving mode or shutting them down has a significant impact on system energy consumption. Shutdown requires the migration of virtual machines to active centers that are not sufficiently loaded, and a closed center can also be reactivated in order to ensure that the QoS requirements of the system are met. Other related studies on deploying IoT application services in the fog-cloud system can be found in [20]–[24].
Based on the provided information, the research proposes the QoS-fog algorithm, which aims to im-
prove QoS in the fog environment. This algorithm seeks to deliver enhanced performance when there are
requests for high-priority services, especially those sensitive to latency. It manages the network load directed
towards fog nodes, thereby reducing energy consumption in the fog environment.
This paper is structured as follows: the proposed algorithm is presented in section 2; section 3 provides the discussion and performance evaluation; finally, section 4 concludes our paper.
174 ❒ ISSN: 2252-8938
Therefore, the places where services are hosted are one of three: the first level of the fog pyramid (high-priority services), the second level (low-priority services), and the cloud data center. According to the proposed algorithm, low-priority services may also be hosted by nodes of the first level of the hierarchy, but only when some of those nodes are idle and have entered energy-saving mode due to a decreased computational load, and the lower-level nodes are congested, in an attempt to avoid sending the mentioned service to the cloud.
− Cloud environment: End-user requests that the fog environment was unable to process are sent to the
cloud servers, each of which has its own resources of processing power, storage space, and bandwidth.
2.2. The mathematical models used in the proposed algorithm
Our goal under this architecture is to achieve the QoS required for high-priority (time-sensitive) services while managing energy at the fog level at the same time. Accordingly, the most prominent mathematical relationships upon which our work is based to achieve this goal are the following:
a) The energy consumption model adopted in the algorithm is based on CPU utilization, modeled as a linear function of this utilization. Since the deployment/allocation of services excludes IoT devices, there is no local processing; processing occurs at the fog or cloud data-center level. Therefore, the total energy consumption of the system depends on the sum of two factors: the energy consumption of fog nodes while executing the services allocated to them, plus the cloud energy consumption, given by (1):
$E_{total} = \sum_{i=1}^{m} E_i + \sum_{p=1}^{k} E_L$  (1)
The energy consumption of the fog nodes while executing the services assigned to them is represented according to (2) [25], [26]:

$E_i = \sum_{j=1}^{n} P(u)_i \times x_{ji} \cdot \frac{s_j^{size}}{s_j^{CPU}}$  (2)

$P(u)_i = Fn_i^{min} + (Fn_i^{max} - Fn_i^{min}) \cdot u_i$  (3)
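The linear power model of (3) and the per-node energy of (2) translate directly into code. The following is a minimal sketch; the dictionary layout of a service (its `size` and `cpu` fields) is an illustrative assumption.

```python
def node_power(fn_min: float, fn_max: float, u: float) -> float:
    """Linear power model of (3): P(u)_i = Fn_i^min + (Fn_i^max - Fn_i^min) * u_i,
    where u is the CPU utilization of fog node i in [0, 1]."""
    return fn_min + (fn_max - fn_min) * u

def node_energy(fn_min: float, fn_max: float, u: float,
                services: list, assigned: list) -> float:
    """Energy of one fog node per (2): the sum over its services of
    P(u)_i * x_ji * (s_j^size / s_j^CPU), where x_ji in {0, 1} marks
    whether service j is assigned to node i."""
    p = node_power(fn_min, fn_max, u)
    return sum(p * x * (s["size"] / s["cpu"])
               for s, x in zip(services, assigned))
```

A node idling at the minimum power draw (`u = 0`) thus consumes `fn_min` per unit of work, rising linearly to `fn_max` at full utilization.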
The symbol $T_{g,RM}$ denotes the delay between the fog gateway and the resource manager in the fog system, while the symbols $T_{RM,FNL1}$, $T_{RM,FNL2}$, and $T_{CM,PM}$ express the delay between the fog resource manager and a service-implementing node in the first and second hierarchical levels, and between the cloud resource manager and an implementing node in the data center, respectively. The waiting times experienced by service requests, including delays in processing and queuing, when execution occurs in the first fog level, the second fog level, or the cloud, are represented by the symbols $T_{(W+PRO)L1}$, $T_{(W+PRO)L2}$, and $T_{(W+PRO)PM}$. As for the symbols $x_{j,FNL2}$, $x_{j,FNL1}$, and $x_{j,cloud}$, they represent logical variables that indicate whether the service requested by the IoT device is executed in a second-level fog node, a first-level fog node, or the cloud, respectively.
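Assuming exactly one placement indicator is set per service, the delays defined above compose into a per-service response time. The sketch below mirrors that composition; the tier labels and dictionary layout are illustrative, not the paper's notation.

```python
def response_time(x: dict, t_gw_rm: float, t_rm: dict, t_wp: dict) -> float:
    """Per-service response time: gateway-to-resource-manager delay,
    plus the manager-to-executing-node delay and the waiting+processing
    time of the tier selected by the 0/1 indicators x (mirroring
    x_(j,FNL1), x_(j,FNL2), x_(j,cloud))."""
    assert sum(x.values()) == 1, "exactly one placement per service"
    return t_gw_rm + sum(x[tier] * (t_rm[tier] + t_wp[tier]) for tier in x)
```

For instance, a service placed at the first fog level accumulates only the gateway delay plus the level-1 manager-to-node delay and level-1 waiting/processing time.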
– The length of the queue in which tasks are lined up for processing is a factor that has a significant impact on determining where to host a high-priority service Sh within the second-level fog nodes, which is what happens when the first level is congested. Care is taken not to move the high-priority service to the cloud for processing if there are fog nodes dedicated to the low-priority services Sl that are able to meet the requirements of the high-priority service in terms of resources and time constraints (deadline); when time or resources are unavailable, there may even be a forced migration of low-priority services to the cloud for processing (conditional forced migration). From the above, we find that the queues of the second-level fog nodes handle both types of services, and therefore the queue length is given according to (5) [2]:
$T_Q = \sum_{h=1}^{m} T_h^{process} + \sum_{l=1}^{k} T_l^{process}$  (5)
The symbols Thprocess and Tlprocess represent the expected execution time for high- and low-priority
services, respectively. Of course, the queue waiting time must not exceed a specific threshold, in
order for the fog node at the second level to be considered acceptable in terms of the possibility of
hosting a high-priority service for execution.
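The queue-length computation of (5) and the acceptance threshold just described can be sketched as follows; the millisecond threshold is a parameter of the simulation, not a fixed constant.

```python
def queue_time(high_exec_times: list, low_exec_times: list) -> float:
    """T_Q of (5): the total expected execution time of the high- and
    low-priority services already queued at a second-level fog node."""
    return sum(high_exec_times) + sum(low_exec_times)

def acceptable_host(high_exec_times: list, low_exec_times: list,
                    threshold_ms: float) -> bool:
    """A second-level node may host a new high-priority service only if
    its queue waiting time does not exceed the configured threshold."""
    return queue_time(high_exec_times, low_exec_times) <= threshold_ms
```

With the 50 ms threshold used later in the evaluation, a node whose queued services sum to 35 ms of work would be accepted, while one at 60 ms would be rejected.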
the requirements for executing the high-priority service. If a suitable node is found, the algorithm will, before
finally hosting the service, verify the processing capacity of the selected node against a threshold (this condition
is considered an additional safety factor for the readiness of the node, which essentially meets the requirements,
given that resource updates are periodic, and during the period between updates, migrations may occur between
fog nodes). The high-priority service is executed if the condition is met; otherwise, the fog node is responsible
for selecting a node from the same level (closest to the manager) to execute the service, driven by the necessity
of maintaining the QoS.
The node then sends a hosting message to its neighbors. If it receives multiple responses, the crowded
node, so to speak, selects the nearest node for hosting. However, if it does not receive a response, the crowded
node sends feedback to the manager, who will schedule the incoming service to the nodes of the level furthest
from the center in the hierarchical structure (second level). Here, the selection of a second-level node depends
on the length of the waiting queue in it, as the time factor becomes very important, given that the service is
sensitive to delay and its deadline may not be exceeded (intuitively, the nodes chosen for hosting by the manager meet the service requirements, but the trade-off between nodes depends on the length of the waiting queue). If no server nodes are available, a low-priority service (or services) will be migrated to cloud data centers for processing, and the high-priority service will be hosted in its place for local (fog) processing.
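The placement flow described above can be sketched as follows. This is a simplified illustration under stated assumptions: it omits the neighbor-query step, reuses two hypothetical thresholds, and the node/service dictionaries are invented for the example; it is not the paper's exact procedure.

```python
def place_high_priority(service: dict, level1_nodes: list, level2_nodes: list,
                        capacity_threshold: float, queue_threshold_ms: float):
    """Simplified placement of a high-priority service in the fog hierarchy."""
    # 1) Try a first-level node with enough free capacity, re-checked
    #    against a readiness threshold (the "additional safety factor").
    for node in level1_nodes:
        if node["free_cpu"] >= service["cpu"] and node["free_cpu"] >= capacity_threshold:
            return ("L1", node["id"])
    # 2) Otherwise the manager schedules to second-level nodes, ordered
    #    by waiting-queue length (the delay-sensitive trade-off).
    for node in sorted(level2_nodes, key=lambda n: n["queue_ms"]):
        if node["queue_ms"] <= queue_threshold_ms and node["free_cpu"] >= service["cpu"]:
            return ("L2", node["id"])
    # 3) No fog capacity: a low-priority service is migrated to the cloud
    #    and the high-priority service is hosted in its place.
    victim = min(level2_nodes, key=lambda n: n["queue_ms"])
    return ("L2-after-migration", victim["id"])
```

The three return branches correspond to the three outcomes in the text: direct level-1 hosting, queue-driven level-2 hosting, and conditional forced migration of a low-priority service.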
Additionally, the low-priority services, as we mentioned, are hosted directly by the second-level nodes
in the hierarchical structure, and are migrated for cloud processing in two cases, either because of a high-priority
service that the fog first-level nodes did not meet its requirements, or because of the congestion of the second-
level nodes. It may seem unfair to low-priority services, but our proposal, in the context of its attempt to manage the energy resource efficiently and put nodes into energy-saving mode when the network load decreases, takes into account the congestion of the second fog level: if there is a need to migrate a low-priority service to the cloud due to the density of fog-processed services, and there is a level-1 fog node in energy-saving mode (i.e., it has no services to execute), then the low-priority service is scheduled to that node for execution. Figure 1 shows the flowchart of the proposed algorithm.
The evaluation process of the aforementioned algorithms took place by measuring two metrics:
– Total energy consumption: This metric is used to express the amount of energy consumption within
the fog-cloud system resulting from the execution of services required by IoT devices. This metric was
calculated under the influence of three parameters: the number of services, the number of fog nodes, and
the percentage of high priority services to the total number of services.
– Average response time: This metric expresses the average time elapsed between sending a service request
by IoT devices and receiving the response. Likewise, this metric was studied under the influence of two
parameters: the number of services and the number of fog nodes.
Several parameters were configured to complete the simulation process, which ran for 500 ms in each scenario studied, and a threshold of 50 ms was set for the length of the waiting queue. Table 1 shows the resources of fog nodes and cloud servers (taking into account the heterogeneous nature of fog nodes, i.e., variability in resources), in addition to the characteristics of services, whether high or low priority, in terms of processing capacity and memory size. The simulation parameters that vary across the studied scenarios relate to the service, such as size and deadline; each one follows a random uniform distribution (RUD), as illustrated in Table 2.
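Service attributes drawn from a random uniform distribution, as in Table 2, can be generated along the following lines. The ranges and the high-priority ratio below are placeholders for illustration, not the actual Table 2 values.

```python
import random

def make_services(n: int, size_range=(1.0, 5.0), deadline_range=(50.0, 500.0),
                  high_priority_ratio=0.3, seed=42) -> list:
    """Generate n synthetic services whose size and deadline each follow
    a random uniform distribution over the given (illustrative) ranges."""
    rng = random.Random(seed)  # fixed seed for reproducible scenarios
    services = []
    for j in range(n):
        services.append({
            "id": j,
            "size": rng.uniform(*size_range),            # e.g. MB
            "deadline_ms": rng.uniform(*deadline_range),  # per-service deadline
            "priority": "high" if rng.random() < high_priority_ratio else "low",
        })
    return services
```

Varying `n`, the ranges, and `high_priority_ratio` reproduces the kinds of scenario sweeps studied in the evaluation (number of services and share of high-priority services).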
Figure 4 shows the effect of increasing the number of fog nodes on the total system energy consump-
tion. Again, the proposed algorithm is able to achieve lower energy consumption compared to edge-ward and
cloud only. We note that as the number of nodes increases, the fixed set of 100 services (in all the scenarios studied and expressed in the figure) is hosted on nodes that are appropriate in terms of resources and the ability to meet the services' deadlines, while the rest of the fog nodes enter energy-saving mode. Moreover, increasing the number of nodes reduces the probability of sending the required services to the cloud data center for processing, and this plays an important role in the noticeable energy decrease witnessed by the service placement algorithms under study (edge-ward, QoS-fog).
Figure 5 expresses the effect of the ratio of high-priority services to the total number of services the system is required to implement/execute on the energy consumption metric when applying the three placement algorithms (strategies). By deploying/placing high-priority services within the nodes of the first fog level, which have spatial proximity to the fog resource manager and high processing capabilities compared to the second-level nodes, the proposed algorithm was able to reduce the energy consumed, thanks to the reduced time required to implement/execute services, whether in terms of processing or network transmissions. Naturally, the results show that the higher the ratio, the lower the energy at the system level; we see this with all three algorithms, and this behavior is due to the simulation settings, which considered the sizes of high-priority services to be smaller than those of low-priority services [14].
Figure 4. Total energy consumption as a function of increasing the number of fog nodes
Figure 5. Total energy consumption as a function of the ratio of the number of high-priority services to the
total number of services
Increasing the number of fog nodes has a positive effect on the average response time (a noticeable decrease) when using placement algorithms that rely on fog as part of their architecture, i.e., QoS-fog and edge-ward, with our algorithm superior in terms of performance, as Figure 7 shows. The number of fog nodes is not a parameter that affects the cloud-only placement algorithm, as its processing takes place in data centers. The reason the proposal achieves a shorter time is its exploitation of its hierarchical architecture and its attempt to implement high-priority (time-sensitive) services in the fog subsystem by migrating delay-tolerant second-level services to the cloud (if necessary) in order to maintain the QoS represented by response time, which meets the deadlines of those services.
Figure 7. Average response time as a function of increasing the number of fog nodes
4. CONCLUSION
The proposed algorithm provided better performance than the edge-ward and cloud-only strategies, as we note its ability to achieve efficiency in system energy use under the influence of several parameters: the number of services, the number of fog nodes, and the percentage of high-priority services required by the IoT user. This is achieved as a result of node feedback: each fog node sends its load to the resource manager, which uses this information to manage power usage. In addition, the proposed algorithm was able to reduce the average response time for services, which means it is able to raise the level of service provided. In future work, we will seek to evaluate the performance of the QoS-fog algorithm against other algorithms dealing with different service priorities.
REFERENCES
[1] F. Alenizi and O. Rana, “Minimising delay and energy in online dynamic fog systems,” Computer Science & Information Technol-
ogy, vol. 14, pp. 139-158, 2020.
[2] H. R. Arkian, A. Diyanat, and A. Pourkhalili, “MIST: Fog-based data analytics scheme with cost-efficient resource provisioning for
IoT crowdsensing applications,” Journal of Network and Computer Applications, vol. 82, pp. 152–165, 2017.
[3] S. Pallewatta, V. Kostakos, and R. Buyya, “Placement of microservices-based IoT applications in fog computing: a taxonomy and
future directions,” ACM Computing Surveys, vol. 55, no. 14, pp. 1–43, 2023, doi: 10.1145/3592598.
[4] R. Mahmud, R. Kotagiri, and R. Buyya, “Fog computing: a taxonomy, survey, and future directions,” in Internet of Everything,
Singapore: Springer, 2018, pp. 1-15, doi: 10.1007/978-981-10-5861-5_5.
[5] P. Hu, S. Dhelim, H. Ning, and T. Qiu, “Survey on fog computing: Architecture, key technologies, applications and open issues,”
Journal of Network and Computer Applications, vol. 98, pp. 27–42, 2017.
[6] C. Canali, C. Gazzotti, R. Lancellotti, and F. Schena, “Placement of IoT microservices in fog computing systems: A comparison of
heuristics,” Algorithms, vol. 16, no. 9, 2023, doi: 10.3390/a16090441.
[7] R. Vilalta et al., “Improving security in internet of things with software defined networking,” in 2016 IEEE Global Communications
Conference (GLOBECOM), USA, 2016, pp. 1-6, doi: 10.1109/GLOCOM.2016.7841889.
[8] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its role in the internet of things,” in Proceedings of the first
edition of the MCC workshop on Mobile cloud computing, pp. 13-16, doi: 10.1145/2342509.2342513.
[9] Q. Tang, R. Xie, F. R. Yu, T. Huang, and Y. Liu, “Decentralized computation offloading in IoT fog computing system with en-
ergy harvesting: A Dec-POMDP approach,” in IEEE Internet of Things Journal, vol. 7, no. 6, pp. 4898-4911, June 2020, doi:
10.1109/JIOT.2020.2971323.
[10] Q. Zhu, B. Si, F. Yang, and Y. Ma, “Task offloading decision in fog computing system,” in China Communications, vol. 14, no. 11,
pp. 59-68, 2017, doi: 10.1109/CC.2017.8233651.
[11] H. Gupta, A. V. Dastjerdi, S. K. Ghosh, and R. Buyya, “iFogSim: A toolkit for modeling and simulation of resource management
techniques in the Internet of Things, Edge and Fog computing environments,” Software: Practice and Experience, vol. 47, no. 9,
pp. 1275-1296, 2017, doi: 10.1002/spe.2509.
[12] Y. Xiao and M. Krunz, “QoE and power efficiency tradeoff for fog computing networks with fog node cooperation,” IEEE INFO-
COM 2017 - IEEE Conference on Computer Communications, USA, 2017, pp. 1-9, doi: 10.1109/INFOCOM.2017.8057196.
[13] B. Jamil et al., “A job scheduling algorithm for delay and performance optimization in fog computing,” Concurrency and Compu-
tation: Practice and Experience, vol. 32, no. 7, 2019, doi: 10.1002/cpe.5581.
[14] M. Chen and Y. Hao, “Task offloading for mobile edge computing in software defined ultra-dense network,” in IEEE Journal on
Selected Areas in Communications, vol. 36, no. 3, pp. 587-597, 2018, doi: 10.1109/JSAC.2018.2815360.
[15] M. Al-Khafajiy, T. Baker, H. Al-Libawy, Z. Maamar, M. Aloqaily, and Y. Jararweh, “Improving fog computing performance via Fog-2-Fog collaboration,” Future Generation Computer Systems, vol. 100, pp. 266-280, 2019, doi: 10.1016/j.future.2019.05.015.
[16] X. Gao, X. Huang, S. Bian, Z. Shao, and Y. Yang, “PORA: Predictive offloading and resource allocation in dynamic fog computing
systems,” in IEEE Internet of Things Journal, vol. 7, no. 1, pp. 72-87, 2020, doi: 10.1109/JIOT.2019.2945066.
[17] H. H. O. Hassan, S. Azizi, and M. Shojafar, “Priority, network and energy-aware placement of IoT-based application services in
fog-cloud environments,” IET Communications, vol. 14, pp. 2117-2129, 2020, doi: 10.1049/iet-com.2020.0007.
[18] F. Alenizi and O. Rana, “Dynamically controlling offloading thresholds in fog systems,” Sensors, vol. 21, no. 7, 2021, doi:
10.3390/s21072512.
[19] P. Kayal, “Autonomic IoT application placement in edge/fog computing,” Journal of Mathematical Techniques and Computational
Mathematics, vol. 3, no. 1, pp. 1-8, 2024.
[20] A. Yousefpour et al., “FOGPLAN: A lightweight QoS-Aware dynamic fog service provisioning framework,” in IEEE Internet of
Things Journal, vol. 6, no. 3, pp. 5080-5096, 2019, doi: 10.1109/JIOT.2019.2896311.
[21] M. A. H. Monil, R. Qasim, and R. M. Rahman, “Energy-aware VM consolidation approach using combination of heuristics and
migration control,” Ninth International Conference on Digital Information Management (ICDIM 2014), Phitsanulok, 2014, pp.
74-79, doi: 10.1109/ICDIM.2014.6991413.
[22] A. Mosa and N. W. Paton, “Optimizing virtual machine placement for energy and SLA in clouds using utility functions,” Journal of
Cloud Computing, vol. 5, 2016, doi: 10.1186/s13677-016-0067-7.
[23] M. A. H. Monil and R. M. Rahman, “Implementation of modified overload detection technique with VM selection strategies based
on heuristics and migration control,” 2015 IEEE/ACIS 14th International Conference on Computer and Information Science (ICIS),
USA, 2015, pp. 223-227, doi: 10.1109/ICIS.2015.7166597.
[24] M. Monil and R. M. Rahman, “VM consolidation approach based on heuristics, fuzzy logic, and migration control,” Journal of
Cloud Computing, vol. 5, 2016, doi: 10.1186/s13677-016-0059-7.
[25] Y. C. Lee and A. Y. Zomaya, “Energy efficient utilization of resources in cloud computing systems,” The Journal Supercomputing,
vol. 60, pp. 268–280, 2012, doi: 10.1007/s11227-010-0421-3.
[26] A. Beloglazov, J. Abawajy, and R. Buyya, “Energy-aware resource allocation heuristics for efficient management of data centers for
cloud computing,” Future Generation Computer Systems, vol. 28, no. 5, pp. 755-768, 2012, doi: 10.1016/j.future.2011.04.017.
[27] C. Puliafito et al., “MobFogSim: Simulation of mobility and migration for fog computing,” Simulation Modelling Practice and
Theory, vol. 101, 2020, doi: 10.1016/j.simpat.2019.102062.
[28] M. M. Lopes, W. A. Higashino, M. A. M. Capretz, and L. F. Bittencourt, “MyiFogSim: A simulator for virtual machine migration
in fog computing,” in Companion Proceedings of the 10th International Conference on Utility and Cloud Computing (UCC ’17
Companion), New York, USA, 2017, pp. 47-52, doi: 10.1145/3147234.3148101.
[29] D. Gonçalves, C. Puliafito, E. Mingozzi, O. Rana, L. Bittencourt, and E. Madeira, “Dynamic network slicing in fog computing
for mobile users in MobFogSim,” in 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC), UK,
2020, pp. 237-246, doi: 10.1109/UCC48980.2020.00042.
BIOGRAPHIES OF AUTHORS
Ali Ahmed Razzaq holds a master's degree in computer network engineering from Andhra University. He is now a research scholar at Andhra University in the IoT specialty, pursuing a Ph.D. He has several skills in the field of artificial intelligence and its programming in the field of networks, and in website design using the Django framework in Python. He has worked in the field of air navigation and its systems as a data entry operator for aviation transit (FDO) in the Iraqi Air Traffic Management Center's Area Control Center (ACC), and is now an air traffic controller at Baghdad International Airport (Iraq). He can be contacted at email: [email protected].