
DSV Report Series No. 23-008

Distributed Intelligence for IoT Systems Using Edge Computing
Ramin Firouzi

Doctoral Thesis in Computer and Systems Sciences at Stockholm University, Sweden 2023
Distributed Intelligence for IoT Systems Using
Edge Computing
Ramin Firouzi
Academic dissertation for the Degree of Doctor of Philosophy in Computer and Systems
Sciences at Stockholm University to be publicly defended on Friday 13 October 2023 at 13.00 in
Lilla hörsalen, Borgarfjordsgatan 12.

Abstract
Over the past decade, the Internet of Things (IoT) has undergone a paradigm shift away from centralized cloud computing
to edge computing. Hundreds of billions of things are estimated to be deployed in the rapidly advancing IoT paradigm,
resulting in an enormous amount of data. Sending all the data to the cloud has recently proven to be a performance
bottleneck, as it causes many network issues, including high latency, high power consumption, security issues, privacy
issues, etc. However, the existing paradigms do not use edge devices for decision-making. Distributed intelligence could
strengthen the IoT in several ways by distributing decision-making tasks among edge devices within the network instead
of sending all data to a central server; in such a scheme, computational tasks and data are shared among edge devices. Edge computing
offers many advantages, including distributed processing, low latency, fault tolerance, better scalability, better security, and
data protection. These advantages are helpful for critical applications that require higher reliability, real-time processing,
mobility support, and context awareness. This thesis investigated the application of different types of intelligence (e.g.,
rule-based, machine learning, etc.) to implementing distributed intelligence at the edge of the network and the network
challenges that arise. The first part of this thesis presents a novel and generalizable distributed intelligence architecture
that leverages edge computing to enable the intelligence of things by utilizing information closer to IoT devices. The
architecture comprises two tiers, which address the heterogeneity and constraints of IoT devices. Additionally, the first
part of this thesis identifies a suitable reasoner for two-level distributed intelligence and an efficient way of applying it in
the architecture via an IoT gateway. To mitigate communication challenges in edge computing, the second part of the thesis
proposes two-level mechanisms by leveraging the benefits of software-defined networking (SDN) and 5G networks based
on open radio access network (O-RAN) as part of a communication overlay for the distributed intelligence architecture.
The third part of this thesis investigates integrating the two-tier architecture and the communication mechanisms in order
to provide distributed intelligence in IoT systems in an optimal manner.

Keywords: Internet of Things (IoT), Edge Computing, Distributed Intelligence, Software Defined Networking (SDN),
Federated Learning, 5G, O-RAN, Network Slicing, Reinforcement Learning.

Stockholm 2023
http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-220549

ISBN 978-91-8014-476-6 (print)
ISBN 978-91-8014-477-3 (PDF)
ISSN 1101-8526

Department of Computer and Systems Sciences


Stockholm University, 164 07 Kista
Distributed Intelligence for IoT Systems Using Edge Computing
Ramin Firouzi
© Ramin Firouzi, Stockholm University 2023

ISBN print 978-91-8014-476-6


ISBN PDF 978-91-8014-477-3
ISSN 1101-8526

Printed in Sweden by Universitetsservice US-AB, Stockholm 2023


To my beloved parents
Zahra and Mohammad,
To my sister and friend forever
Rezan,
and
To my love Midya
Abstract

Over the past decade, the Internet of Things (IoT) has undergone a paradigm
shift away from centralized cloud computing to edge computing. Hundreds of
billions of things are estimated to be deployed in the rapidly advancing IoT
paradigm, resulting in an enormous amount of data. Sending all the data to the
cloud has recently proven to be a performance bottleneck, as it causes many
network issues, including high latency, high power consumption, security is-
sues, privacy issues, etc. However, the existing paradigms do not use edge de-
vices for decision-making. Distributed intelligence could strengthen the IoT in
several ways by distributing decision-making tasks among edge devices within
the network instead of sending all data to a central server; in such a scheme,
computational tasks and data are shared among edge devices. Edge computing offers many
advantages, including distributed processing, low latency, fault tolerance, bet-
ter scalability, better security, and data protection. These advantages are help-
ful for critical applications that require higher reliability, real-time processing,
mobility support, and context awareness. This thesis investigated the applica-
tion of different types of intelligence (e.g., rule-based, machine learning, etc.)
to implementing distributed intelligence at the edge of the network and the
network challenges that arise. The first part of this thesis presents a novel and
generalizable distributed intelligence architecture that leverages edge comput-
ing to enable the intelligence of things by utilizing information closer to IoT
devices. The architecture comprises two tiers, which address the hetero-
geneity and constraints of IoT devices. Additionally, the first part of this thesis
identifies a suitable reasoner for two-level distributed intelligence and an ef-
ficient way of applying it in the architecture via an IoT gateway. To mitigate
communication challenges in edge computing, the second part of the thesis
proposes two-level mechanisms by leveraging the benefits of software-defined
networking (SDN) and 5G networks based on open radio access network (O-
RAN) as part of a communication overlay for the distributed intelligence ar-
chitecture. The third part of this thesis investigates integrating the two-tier ar-
chitecture and the communication mechanisms in order to provide distributed
intelligence in IoT systems in an optimal manner.
Sammanfattning

Under det senaste decenniet har sakernas internet (Internet of Things, IoT) genomgått ett paradigmskifte från centraliserad molnbaserad databehandling till edge-beräkningar. Hundratals miljarder enheter uppskattas ingå i det snabbt framåtskridande IoT-paradigmet, vilket resulterar i en enorm mängd data. Att skicka all denna data till molnet har nyligen visat sig vara en flaskhals för prestanda, eftersom det orsakar många nätverksproblem, inklusive fördröjningar, energiförbrukning, säkerhetsproblem, integritetsproblem, etc. Befintliga paradigm använder dock inte edge-enheter för beslutsfattande. Distribuerad intelligens kan förstärka IoT på flera sätt genom att distribuera beslutsuppgifter bland edge-enheter inom nätverket i stället för att skicka all data till en central server; i ett sådant upplägg delas alla beräkningsuppgifter och data bland edge-enheterna. Edge-beräkningar erbjuder många fördelar, inklusive distribuerad bearbetning, låg fördröjning, feltolerans samt bättre skalbarhet, säkerhet och dataskydd. Dessa fördelar är användbara för kritiska applikationer som kräver högre tillförlitlighet, realtidsbehandling, mobilitetsstöd och kontextmedvetenhet. Denna avhandling undersökte tillämpningen av olika typer av intelligens (till exempel regelbaserad, maskininlärning, etc.) och de nätverksutmaningar som uppstår vid implementering av distribuerad intelligens vid nätverkets kant (närmare användarna). Den första delen av denna avhandling presenterar en ny och generaliserbar arkitektur för distribuerad intelligens som utnyttjar edge-beräkningar för att möjliggöra sakernas intelligens genom att använda information närmare IoT-enheterna. Arkitekturen består av två lager, vilka adresserar heterogeniteten och begränsningarna hos IoT-enheter. Dessutom identifierar avhandlingens första del en lämplig reasoner för den tvånivåiga distribuerade intelligensen och ett effektivt sätt att tillämpa den i arkitekturen via en IoT-gateway. För att minska kommunikationsutmaningarna i edge-beräkningar föreslår den andra delen av avhandlingen mekanismer på två nivåer genom att utnyttja fördelarna med programvarudefinierade nätverk (software-defined networking, SDN) och O-RAN-baserade 5G-nätverk som en del av en kommunikationsöverlagring för den distribuerade intelligensarkitekturen. Den tredje delen av avhandlingen undersökte integrationen mellan tvålagersarkitekturen och kommunikationsmekanismerna för att tillhandahålla distribuerad intelligens i IoT-system på ett optimalt sätt.
Acknowledgements

This dissertation signifies the zenith of an intensive five-year period spent im-
mersed in my doctoral studies. The aphorism holds true: no academic accom-
plishment is an isolated effort; instead, it is a synergy of many. This dissertation
is a prime example of that. Thus, there are numerous individuals I would like
to express my gratitude to, as their constant support throughout my educational
journey deserves acknowledgment.
First and foremost, I would like to extend my profound gratitude to my pri-
mary supervisor, Professor Rahim Rahmani. His guidance, invaluable coun-
sel, and steadfast support have been instrumental in my academic journey. The
knowledge and expertise he shared have significantly enriched my understand-
ing and capabilities. I am sincerely grateful for his mentorship, which I firmly
believe has facilitated significant strides in my development as an indepen-
dent researcher and academic. I express profound gratitude to the Department
of Computer and Systems Sciences (DSV) for providing financial support for
my studies. I take immense pride in having called DSV my academic home
for the past five years. The department welcomed me with open arms and
furnished me with all the necessary resources to facilitate my research. The
invigorating work and research environment have made my days fruitful, re-
warding, and fulfilling. I have also had the privilege of working with fellow
PhD students and scholars within the department, specifically those within the
Systems Analysis and Security Unit (SAS). I extend my thanks for your unwa-
vering support, enlightening discussions related to my research and career, and
the occasional shared moments of humor. I am additionally indebted to the ad-
ministrative and technical staff at DSV, who have been of immense help with
matters big and small. Your assistance has been instrumental in my journey,
and for that, I am deeply grateful.
Lastly, but by no means least, I must convey my heartfelt appreciation to
my family. My dear parents, Zahra and Mohammad, deserve nothing less than
my highest level of gratitude for their unending love, support, and encourage-
ment. I am further fortunate to have a sibling, Rezan, whose love, care, and
support have been nothing short of a blessing. The knowledge that you are
always there for me instills a profound sense of comfort and serenity. Your
continual encouragement of my academic ambitions has proven to be invalu-
able. Without your unwavering support, I would not be in the position I am in
today.
In the same vein, I would like to extend my sincerest thanks to my wife,
Midya. Her patience and steadfast support throughout this journey have been
pivotal. Her understanding and encouragement have served as a constant source
of strength, making this challenging academic pursuit more bearable. Midya,
your faith in me has been a constant motivation, and for that, I remain eternally
grateful.

Stockholm, Sep. 2023

Ramin Firouzi
List of Papers

The following papers, referred to in the text by their Roman numerals, are
included in this thesis.
PAPER I: An Autonomic IoT Gateway for Smart Home Using Fuzzy Logic
Reasoner
Ramin Firouzi, Rahim Rahmani, and Theo Kanter
Procedia Comput. Sci., vol. 177, pp. 102–111, Jan. 2020,
DOI: 10.1016/j.procs.2020.10.017.
PAPER II: Distributed-Reasoning for Task Scheduling through Distributed
Internet of Things Controller
Ramin Firouzi, Rahim Rahmani, and Theo Kanter
Procedia Comput. Sci., vol. 184, pp. 24–32, Jan. 2021,
DOI: 10.1016/j.procs.2021.03.014.
PAPER III: Federated Learning for Distributed Reasoning on Edge Com-
puting
Ramin Firouzi, Rahim Rahmani, and Theo Kanter
Procedia Comput. Sci., vol. 184, pp. 419–427, Jan. 2021,
DOI: 10.1016/j.procs.2021.03.053.
PAPER IV: A Distributed SDN Controller for Distributed IoT
Ramin Firouzi and Rahim Rahmani
IEEE Access, vol. 10, pp. 42873–42882, Apr. 2022,
DOI: 10.1109/ACCESS.2022.3168299.
PAPER V: 5G-Enabled Distributed Intelligence Based on O-RAN for Dis-
tributed IoT Systems
Ramin Firouzi and Rahim Rahmani
Sensors, vol. 23, pp. 133–147, Dec. 2022,
DOI: 10.3390/s23010133.
PAPER VI: Delay-Sensitive Resource Allocation for IoT Systems in 5G
O-RAN Networks
Ramin Firouzi and Rahim Rahmani
Internet of Things, Submitted.
Reprints were made with permission from the publishers.
Author’s Contribution

For all papers of which I am the lead author, the main contributions to the
papers have been discussed with all authors. I developed algorithms, designed
the experiments, and wrote the main parts of the manuscript. Each author
discussed, contributed to, and approved the final manuscript for all included
publications.
Contents

Abstract i

Sammanfattning iii

Acknowledgements v

List of Papers vii

Author’s Contribution ix

Abbreviations xiii

List of Figures xv

List of Tables xvii

1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Research Aims . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Research Questions . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Summary of the Major Contributions . . . . . . . . . . . . . . 8
1.7 Dissertation Disposition . . . . . . . . . . . . . . . . . . . . . 11

2 Background and Related Work 13


2.1 Cloud-Based IoT Systems . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Enabling Edge Computing for IoT Systems . . . . . . . . . . 14
2.3 Federated Learning as Distributed Machine Learning in IoT . . 16
2.4 Communication in Distributed Intelligence . . . . . . . . . . . 19
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3 Research Methodology 23
3.1 Philosophical Assumptions . . . . . . . . . . . . . . . . . . . 23
3.2 Research Methods . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Explication of the Problem . . . . . . . . . . . . . . . . . . . 24
3.4 Definition of Requirements . . . . . . . . . . . . . . . . . . . 25
3.5 Designing and Developing the Artifact . . . . . . . . . . . . . 26
3.6 Demonstration of the Artifact . . . . . . . . . . . . . . . . . . 27
3.7 Evaluation of the Artifact . . . . . . . . . . . . . . . . . . . . 28
3.8 Research Ethics . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

4 Summary of Papers 33
4.1 Two-Level Intelligence . . . . . . . . . . . . . . . . . . . . . 33
4.1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.2 Study Design . . . . . . . . . . . . . . . . . . . . . . 34
4.1.3 Main Findings . . . . . . . . . . . . . . . . . . . . . 40
4.2 Two-Level Communication Mechanism . . . . . . . . . . . . 41
4.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.2 Study Design . . . . . . . . . . . . . . . . . . . . . . 43
4.2.3 Main Findings . . . . . . . . . . . . . . . . . . . . . 48
4.3 Distributed Intelligence for Distributed IoT . . . . . . . . . . 52
4.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.2 Study Design . . . . . . . . . . . . . . . . . . . . . . 54
4.3.3 Main Findings . . . . . . . . . . . . . . . . . . . . . 58

5 Concluding Remarks 65
5.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.2 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

References lxxi
Abbreviations

5G The Fifth Generation of Wireless Cellular Technology


AI Artificial Intelligence
ANN Artificial Neural Network
CP Control Plane
CU Centralized Unit
DDQL Double Deep Q-Learning
DDS Data Distribution Service
DI Distributed Intelligence
DQL Deep Q-Learning
DSF Distributed SDN control plane Framework
DSR Design Science Research
DU Distributed Unit
eMBB Enhanced Mobile Broadband
EMS Energy Management System
ETSI European Telecommunications Standards Institute
FedAvg Federated Averaging
FL Federated Learning
FLC Fuzzy Logic Controller
IID Independent and Identically Distributed
IoT Internet of Things
LSTM Long Short-Term Memory
LUMP Link Update Message Protocol
MAPE Mean Absolute Percentage Error
MDP Markov Decision Process
MEC Multi-access Edge Computing
ML Machine Learning
mMTC Massive Machine-Type Communication
O-RAN Open Radio Access Network
RB Resource Block
RIC RAN Intelligent Controller
RL Reinforcement Learning
RMSE Root Mean Square Error
RTPS Real-Time Publish/Subscribe
SDN Software-Defined Networking
STLF Short-Term Load Forecasting
TCP/IP Transmission Control Protocol/Internet Protocol
UE User Equipment
UP User Plane
URLLC Ultra-Reliable and Low Latency Communications
VR/AR Virtual Reality/Augmented Reality
xApp Third-Party Application
List of Figures

1.1 AI forms and considerations. . . . . . . . . . . . . . . . . . . 4


1.2 The relationships between the papers and research questions. . 8

2.1 The area of focus of this research. . . . . . . . . . . . . . . . 22

3.1 Applying the design science research method . . . . . . . . . 25

4.1 The low-level intelligent controller scheme. . . . . . . . . . . 34


4.2 Distributed intelligent gateway controller in the distributed IoT 35
4.3 The low-level intelligent control scheme. . . . . . . . . . . . . 36
4.4 Fuzzy interface system. . . . . . . . . . . . . . . . . . . . . . 37
4.5 The proposed FLC architecture. . . . . . . . . . . . . . . . . 38
4.6 Edge and cloud reasoning response time. . . . . . . . . . . . . 40
4.7 Scalability in terms of the response time. . . . . . . . . . . . . 41
4.8 Latency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.9 Required bandwidth. . . . . . . . . . . . . . . . . . . . . . . 42
4.10 The architecture of the data and control planes. . . . . . . . . 44
4.11 SDN-IoT integration. . . . . . . . . . . . . . . . . . . . . . . 45
4.12 O-RAN setup for the underlying architecture. . . . . . . . . . 46
4.13 IoT system model with two-level resource allocation. . . . . . 49
4.14 Response time versus the number of packet-in messages. . . . 50
4.15 Throughput for various numbers of controllers. . . . . . . . . 50
4.16 Total delay of URLLC services with respect to the number of
end users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.17 Total delay of eMBB services with respect to the number of
end users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.18 Aggregated throughput of URLLC services with respect to the
number of end users. . . . . . . . . . . . . . . . . . . . . . . 53
4.19 Aggregated throughput of eMBB services with respect to the
number of end users. . . . . . . . . . . . . . . . . . . . . . . 54
4.20 O-RAN setup for the underlying architecture. . . . . . . . . . 57
4.21 The smart grid scenario used in paper III. . . . . . . . . . . . 59
4.22 Delay for various numbers of IoT gateways. . . . . . . . . . . 62
4.23 Delay for various numbers of communication rounds. . . . . . 62
4.24 Accuracy for different numbers of communication rounds. . . 64
List of Tables

3.1 Methods used in each step of the design science methodology. 29

4.1 Distribution of data among nodes. . . . . . . . . . . . . . . . 59


4.2 RMSE and MAPE results for global models. . . . . . . . . . . 60
4.3 RMSE and MAPE results after personalization. . . . . . . . . 60
1. Introduction

This chapter provides a general introduction to the thesis. Specifically, it starts
by describing distributed intelligence challenges in Internet of Things (IoT)
systems and outlines the motivations for improving distributed intelligence in
IoT systems. Aiming at improving the distributed intelligence architecture in
current IoT systems, the research motivation and questions are proposed; this
is followed by a summary of the major contributions. A brief discussion of the
organization of the thesis concludes this chapter.

1.1 Background
The IoT is a technology that connects objects/things so that these things can
start communicating among themselves to provide better services for users in
previously unimaginable ways and make our lives easier [1]. Intelligent things could be
a set of sensors, actuators, smartphones, etc. Some IoT applications include
smart homes [2], smart cities [3], smart agriculture [4], smart healthcare [5],
etc. By 2025, it is estimated that the number of internet-connected devices
will reach over 50 billion, driven by their improved sensing capabilities and
cost-effectiveness [6]. Along with the enormous growth of the IoT, there is
an enormous amount of data, estimated to be more than 79 ZB by 2025 [7].
These data will have particular characteristics, such as higher velocity, more
modalities, higher quality, and greater heterogeneity. However, most of the data col-
lected from the IoT are never utilized or analyzed. Inmarsat conducted compre-
hensive interviews with representatives hailing from 450 organizations spread
across multiple sectors, including agriculture, electrical utilities, mining, and
oil and gas, as well as transport and logistics. Surprisingly, a significant ma-
jority, comprising 86% of the respondents, conceded that their organizations
are not capitalizing on the data generated by IoT initiatives as effectively as
they potentially could [8]. The evolution of artificial intelligence (AI) and the
large amounts of data generated by devices have traditionally led to the use of
centralized cloud servers for analytics [9]. However, this approach is becom-
ing increasingly unsustainable due to several challenges. First, the growing
number of real-time services, such as self-driving cars and virtual reality /
augmented reality (VR/AR), require low latency and cannot tolerate any addi-
tional delays [10]. Autonomous cars, for instance, require real-time inferences
from remote servers to detect potential obstacles and activate brakes. Sending
camera frames to cloud servers for processing can result in long transmission
delays, which is unacceptable for real-time applications like self-driving cars
[11]. Second, privacy is a major concern with cloud-based AI, as users are
often reluctant to upload sensitive information to the cloud, where it can be
exposed to cybersecurity risks [12]. Third, the transfer of large amounts of
unstructured data across networks can put a strain on the network infrastruc-
ture [13]. Fourth, scalability becomes an issue, as the cloud can become a
bottleneck with an increasing number of data sources. Lastly, the utilization
of cloud servers introduces a degree of opacity to the AI processes, transform-
ing them into what is commonly termed a black box. This essentially means
that the underlying computational procedures and decision-making processes
of the AI are not clearly visible to or understandable by the end users. Con-
sequently, this lack of transparency hinders the users’ ability to discern and
rectify prediction errors. It also limits their understanding of how the AI sys-
tem is learning and evolving over time. In other words, the users’ capacity to
oversee, interpret, and troubleshoot the AI’s operation is significantly dimin-
ished when the processing is performed remotely on cloud servers [14]. To
address these challenges, pushing AI to the network edge has emerged as a so-
lution. Edge devices can handle computational tasks without exchanging data
with remote servers, which improves privacy and reduces latency and scala-
bility issues. These edge devices can range from edge servers with GPUs to
small IoT wearables with Raspberry Pi components. However, many devices
still have limited power and memory [15]. Due to the limited resources of edge
devices, it can be challenging to perform AI tasks that require a high computa-
tional load, such as training deep neural networks at the edge of the network.
One solution is to use pervasive computing [16], where various data storage
and processing capacities cooperate to perform AI tasks. This combination of
AI and pervasive computing can address the aforementioned challenges using
various concepts like federated learning and split learning that aim to distribute
AI tasks across different devices. This new field is known as pervasive AI [15]
and aims to distribute AI tasks and models over different types of devices with
different capabilities. This allows for the intelligent and efficient distribution
of AI tasks, leading to sophisticated global missions.
The concept of pervasive AI is driven by the need for AI to be accessible
and effective in various environments, including the edge and the cloud. This
requires the development of new algorithms and models that can be optimized
for different devices and their various computational and storage capacities.
As shown in Fig. 1.1, pervasive AI is a special class of distributed AI, where
the decentralization of AI models is managed using intelligent techniques that
consider the IoT resource constraints, the device heterogeneity, the application
context, etc. Pervasive AI represents a new era in the field of artificial intelli-
gence, where the deployment of AI models is no longer limited to the cloud or
centralized computing systems. It leverages the decentralization of AI mod-
els and the distribution of computation over various edge devices to perform
sophisticated global missions while considering the constraints of these de-
vices [16]. Pervasive AI is not limited to the distribution of AI models and
computation; it also considers the heterogeneous nature of IoT devices and the
application context [15]. Integrating AI and pervasive computing is a complex
task requiring a great deal of research in various fields, including software en-
gineering, computer science, and network science. The benefits of pervasive
AI are numerous, including reduced latency, increased scalability, and reduced
computation and memory overheads [15]. The development of 5G technology,
with its high-speed connectivity, low-latency communication, and large-scale
deployment, has paved the way for the widespread adoption of pervasive AI
and its integration into various applications. The combination of 5G and perva-
sive AI has the potential to revolutionize the AI field, bringing us closer to the
future of AI, where intelligent and connected devices are seamlessly integrated
into our lives, providing us with new and exciting experiences. 5G technology
provides the high-speed connectivity and low-latency communication required
for pervasive AI to function effectively, enabling real-time data exchange be-
tween edge devices and the cloud. Pervasive AI and 5G technology have the
potential to transform the way we interact with technology, providing us with
new and innovative experiences. However, integrating AI and pervasive com-
puting is a complex task requiring a great deal of research and development
in various fields. As the field of pervasive AI continues to evolve, we can ex-
pect new and efficient developments in this area as researchers work to address
the challenges and limitations of traditional AI techniques and unlock the full
potential of IoT systems. Interestingly, pervasive AI ties in with swarm intelli-
gence [17], a concept that draws inspiration from the social behavior of insects,
where the collective behavior of unsophisticated agents interacting locally with
their environment can cause coherent functional global patterns to emerge. In
the context of pervasive AI, each device in the network could be seen as an
individual agent. The distribution of tasks across these devices echoes the
principles of swarm intelligence, where tasks are decentralized, and computa-
tion is spread out. This allows the network to function intelligently as a whole,
even if individual devices have limited capabilities.

Figure 1.1: AI forms and considerations.

1.2 Motivation
The IoT is a technology that is changing the way that we live and work. The
ultimate goal of the IoT is to create a seamless and integrated network that con-
nects people, things, services, and context information in real time. To achieve
this vision, new IoT models that can handle real-time data are required to make
quick decisions and provide fast and efficient processing [18]. This represents
a juncture at which IoT applications could be made to harness the synergistic
power of edge and cloud computing. Cloud computing, in essence, refers to
the delivery of computing services, including servers, storage, databases, net-
working, software, analytics, and intelligence, over the internet. The primary
advantages of cloud computing include increased flexibility, a global scale, and
comprehensive security. However, it has its limitations, notably a high latency
and bandwidth consumption, and potential privacy issues due to the need to
transfer data to a centralized location for processing. On the other hand, edge
computing is a distributed computing paradigm that brings computation and
data storage closer to the location where they are needed to improve response
times and save bandwidth. It addresses many of the challenges inherent in
cloud computing by processing data at the edge of the network, close to the
source of the data. This proximity reduces latency, conserves bandwidth, en-
hances privacy, and allows for real-time analytics. As a result, edge computing
allows for quick decision-making and fast processing. This approach provides
several benefits, such as increased energy efficiency, improved security, greater
mobility, and better support for heterogeneous devices [19]. The integration of
edge and cloud computing can be seen as a crucial step toward harnessing
the beneficial features of both systems. The combination of these approaches
leverages the global reach, powerful analytics, and storage capabilities of the
cloud, with the low latency, bandwidth conservation, and real-time processing
benefits of the edge. This symbiosis significantly reinforces IoT applications,
enabling them to operate more efficiently and effectively. However, edge com-
puting has its limitations, such as limited processing and storage capabilities
[20]. This is where cloud computing can play the role of a complementary
technology. Cloud computing provides a large amount of storage space and
better processing and data analysis capabilities, making it an essential com-
ponent of the IoT ecosystem. To take full advantage of both edge computing
and cloud computing, an edge-cloud architecture is needed that combines the
strengths of both types of computing into one system. This will reinforce IoT
applications and make them more powerful and flexible.
Despite its potential, this two-level architecture is still in the early stages of
development, and many challenges lie ahead. Both academia and industry are
actively researching and developing new solutions related to two-level intelli-
gence to address the aforementioned challenges. The integration of edge and
cloud computing is an important step toward including the beneficial features
of both types of computing in one system to reinforce IoT applications [21].

1.3 Problem Statement

The exponential growth of IoT devices has led to the generation of vast amounts
of data, but most of these data remain unused and unanalyzed. The centraliza-
tion of AI and analytics in cloud servers is becoming unsustainable due to net-
work infrastructure strain, scalability issues, and a lack of transparency. The
limited resources of edge devices also make it challenging to perform complex
AI tasks, leading to the need for the development of pervasive AI. However,
integrating AI and pervasive computing in the presence of high-speed com-
munication technology (5G/6G) is a complex task requiring interdisciplinary
research and development in software engineering, computer science, and net-
work science. Despite the potential benefits of pervasive AI, numerous chal-
lenges still need to be addressed, including effectively distributing AI models
and computation over heterogeneous devices and considering the application
context.

1.4 Research Aims
This research aims to delve into the complexities of enabling distributed intel-
ligence in IoT systems through edge computing and to understand the elements
required for such systems to function effectively. Following this, this research
seeks a suitable solution, considering the requirements and aforementioned
challenges, for implementing such a system. The research focuses on analyz-
ing different system architectures, selecting appropriate reasoners, determin-
ing efficient communication methods, and finding effective ways to organize
controllers.
Edge computing is a rapidly growing area of technology that has the poten-
tial to revolutionize the way IoT systems are designed and implemented. By
leveraging edge computing, it becomes possible to process data at the source
rather than sending data to a remote server for processing. This can result in
faster processing times and lower latency, making edge computing an ideal
solution for IoT systems.
This research delves into the exploration of various system architectures
suitable for implementing distributed intelligence within IoT systems. The
aim is to evaluate the applicability of diverse reasoning mechanisms for these
systems. When we refer to a reasoner, we are essentially discussing the core
logic processing component of an AI system. It is this reasoner that applies
rules or algorithms to data to derive conclusions, make decisions, or trigger
actions. Reasoners can utilize various methodologies, including rule-based
systems, which operate based on predefined rules; fuzzy logic systems, which
handle reasoning that is approximate rather than precise; and machine learning
algorithms, which learn patterns from data to make predictions or decisions.
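To make the reasoner concept concrete, the following minimal sketch (in Python; the membership functions, rule set, and numbers are illustrative assumptions, not taken from the papers in this thesis) shows the fuzzy-logic style of reasoning, in which a crisp sensor reading is fuzzified, matched against rules, and defuzzified into an actuation value:

# Illustrative fuzzy reasoner; the rules and membership functions are
# hypothetical, not those of the controllers proposed in Papers I-II.
def triangular(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temperature_c):
    # Fuzzification: degrees of membership in "warm" and "hot".
    warm = triangular(temperature_c, 18, 24, 30)
    hot = triangular(temperature_c, 26, 33, 40)
    # Rules: IF warm THEN low speed; IF hot THEN high speed.
    low, high = 0.3, 1.0
    # Defuzzification: weighted average of the rule consequents.
    total = warm + hot
    return (warm * low + hot * high) / total if total else 0.0

print(fan_speed(28.0))  # a reading partially "warm" and partially "hot"

Because such rules are interpretable and the arithmetic is trivial, this style of reasoner suits resource-constrained gateways, which is one motivation for applying it at the low level of the proposed architecture.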
This research also delves into the analysis of various communication method-
ologies. These methodologies facilitate the transfer of data and control signals
between distinct components within the system, ensuring seamless and effi-
cient operations.
This research will also focus on finding effective ways to organize the con-
trollers in these systems. The controllers play an important role in the func-
tioning of an IoT system, as they are responsible for receiving and processing
data, making decisions, and executing actions. Thus, it is important that the
controllers are organized in an efficient and scalable way.

1.5 Research Questions


This research aims to answer the following question:

RQ: How can distributed intelligence be enabled using edge computing in
IoT systems?

This question comprises the following sub-questions. This section also
specifies which of the papers included in this thesis have addressed each sub-
question:

RQ1: What system architecture can enable distributed intelligence in the IoT,
and how would it behave in a real-life scenario?
This sub-question addresses the need to identify a suitable system archi-
tecture for enabling distributed intelligence in IoT systems. The aim of
this sub-question is to evaluate the performance of the proposed system
architecture in a real-life scenario and to determine how it would behave
in such a scenario.
Papers I and II deal with this question.

RQ2: What reasoner is suitable for distributed intelligence in the IoT, and how
can it be applied at the edge of the network and integrated into the archi-
tecture?
This sub-question focuses on the selection of a reasoner that is suitable
for distributed intelligence in IoT systems. This sub-question aims to
determine the most suitable reasoner for the purpose of this research,
customize it to meet the system requirements, and integrate it into the
proposed system architecture.
Papers I, II, and III deal with this question.

RQ3: How and to what extent can the scalability of distributed intelligence
be facilitated in terms of communication between edge devices and the
cloud?
This sub-question examines the issue of scalability and fast dissemina-
tion for distributed intelligence in IoT systems. The aim of this sub-
question is to determine how the scalability and fast dissemination of
distributed intelligence can be facilitated through communication be-
tween devices and internet networks and to evaluate the extent to which
this can be achieved.
Papers IV, V, and VI deal with this question.

RQ4: How can the controllers be organized with maximal autonomy of the
system?
This sub-question addresses the need to effectively organize controllers
in IoT systems. The aim of this sub-question is to determine how the
controllers can be organized with maximal autonomy and to evaluate the

effectiveness of such organizations in achieving distributed intelligence
in IoT systems.
Papers IV and VI deal with this question.

Figure 1.2: The relationships between the papers and research questions.

Figure 1.2 shows the relationships between the papers and the research
questions.

1.6 Summary of the Major Contributions


The contributions of this thesis address the above questions and span six stud-
ies that establish the state-of-the-art distributed intelligence architecture to be
applied at the edge of the network in IoT systems. The scientific contribu-
tions of these studies, which were published in peer-reviewed conferences and
journals, are summarized below:

PAPER I: An Autonomic IoT Gateway for Smart Home Using Fuzzy Logic
Reasoner
This paper presents a novel approach for enhancing IoT applica-
tions using a combination of cloud and edge computing. It pro-
poses the use of an IoT edge controller with a fuzzy logic con-
troller to provide low-level intelligence at the edge, which would
reduce the response time and network load. The proposed con-
troller allows the IoT gateway to manage input uncertainties and
improve its performance over time through learning experiences at
the edge. The simulation of a smart home scenario shows the feasi-
bility of the proposed approach in reducing latency and increasing
accuracy.

PAPER II: Distributed-Reasoning for Task Scheduling through Distributed
IoT Controller
This paper focuses on the use of distributed reasoning for the IoT
to improve real-time monitoring, optimization, and fault tolerance
in domains such as traffic and healthcare. It proposes a two-level intelligence scheme
that leverages edge computing to improve the distributed IoT. The
scheme shifts streaming processing from the cloud to edge de-
vices to reduce latency and improve the performance of smart IoT
applications. The proposed IoT gateway controller provides low-
level intelligence using a fuzzy abductive reasoner to overcome
data uncertainties and enable better, reliable, and flexible stream-
ing analytics. Numerical simulations support the feasibility of the
proposed approach.

PAPER III: Federated Learning for Distributed Reasoning on Edge Computing
This paper addresses the challenge of performing machine learn-
ing at the network edge, where IoT and crowd-sourcing generate
large amounts of data. It proposes using federated learning (FL)
at edge servers and gateways to build a central model without
uploading raw data to a centralized location. The paper analyzes
the convergence bound of distributed gradient descent and pro-
poses a control algorithm to determine the best trade-off between
local updates and global parameter aggregation to minimize the
loss function. The proposed approach is evaluated via extensive
experiments with simulated datasets, which show that it performs
near the optimum with various configurations.

PAPER IV: A Distributed SDN Controller for Distributed IoT
This paper presents a distributed intelligence approach for the
IoT that overcomes the challenges of the traditional centralized
cloud computing approach. It aims to distribute decision-making
tasks to edge devices within the network to improve performance,

9
reduce latency, and enhance security and privacy. The approach
integrates transport network control with edge and cloud resources
to provide dynamic and efficient IoT services. A distributed
software-defined networking (SDN)-based architecture for the
IoT is introduced, and an algorithm for selecting clients for FL is
proposed. The architecture is then deployed and evaluated from
multiple perspectives to show the effectiveness of the proposed
approach in providing distributed intelligence at the edge of the
network.
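Paper IV specifies its own client-selection algorithm; purely to illustrate the general idea of selecting FL clients at the edge, a minimal score-and-rank sketch might look as follows (in Python; the attributes and weights are hypothetical assumptions, not the paper's criteria):

# Hypothetical top-k client selection for FL; illustrative only,
# not the algorithm proposed in Paper IV.
def select_clients(clients, k, w_data=0.5, w_link=0.3, w_cpu=0.2):
    # Rank clients by a weighted score over normalized (0..1)
    # attributes: local data volume, link quality, spare compute.
    score = lambda c: w_data * c["data"] + w_link * c["link"] + w_cpu * c["cpu"]
    return sorted(clients, key=score, reverse=True)[:k]

clients = [
    {"id": 1, "data": 0.9, "link": 0.4, "cpu": 0.7},
    {"id": 2, "data": 0.5, "link": 0.9, "cpu": 0.6},
    {"id": 3, "data": 0.8, "link": 0.8, "cpu": 0.2},
]
print([c["id"] for c in select_clients(clients, k=2)])  # prints [1, 3]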

PAPER V: 5G-Enabled Distributed Intelligence Based on O-RAN for Dis-
tributed IoT Systems
This paper proposes a method for deploying and optimizing FL
tasks in an open radio access network (O-RAN) to deliver dis-
tributed intelligence for 5G applications. The method utilizes re-
inforcement learning (RL) for client selection and resource allo-
cation through RAN intelligent controllers (RICs). A slice is as-
signed for training based on the selected clients for each FL task.
The simulation results show that the proposed method improves
the performance of FL, compared to the traditional federated aver-
aging (FedAvg) algorithm, in terms of convergence and commu-
nication rounds. The implementation of the proposed method in
an O-RAN will help in ensuring the efficient and effective train-
ing of FL models, which can be used to make intelligent decisions
in the 5G network.

PAPER VI: Delay-Sensitive Resource Allocation for IoT Systems in 5G
O-RAN Networks
This paper proposes a two-level network slicing mechanism us-
ing the O-RAN architecture to optimize the performance of both
enhanced mobile broadband (eMBB) and IoT services sharing
the same RAN. At a high level, an SDN controller allocates radio
resources to gNodeBs based on the requirements of eMBB and
ultra-reliable and low-latency communications (URLLC) services.
At a low level, each gNodeB allocates its available resources to
its end users and requests additional resources from adjacent gN-
odeBs if needed. The problem is solved using the exponential
weight algorithm for exploration and exploitation (EXP3) and
the multi-agent deep Q-learning (DQL) algorithm. The simula-
tion results show that the proposed mechanism can effectively
manage the slicing process in real time and provide isolation be-
tween eMBB and IoT services.
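EXP3 (exponential weights for exploration and exploitation) is a standard adversarial-bandit algorithm; a minimal generic implementation is sketched below (in Python; the reward function and parameter values are illustrative stand-ins, not the formulation used in Paper VI):

import math, random

def exp3(num_arms, rounds, reward_fn, gamma=0.1):
    # Generic EXP3: exponential weights mixed with uniform exploration.
    weights = [1.0] * num_arms
    for _ in range(rounds):
        total = sum(weights)
        probs = [(1 - gamma) * w / total + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = reward_fn(arm)          # observed reward in [0, 1]
        estimate = reward / probs[arm]   # importance-weighted estimate
        weights[arm] *= math.exp(gamma * estimate / num_arms)
    return weights

# Stand-in reward: arm 2 is best on average.
w = exp3(4, 5000, lambda a: random.random() * (0.4 + 0.2 * (a == 2)))
print(max(range(4), key=lambda a: w[a]))  # typically prints 2

In a slicing setting such as the paper's, each arm would roughly correspond to a radio-resource allocation choice, with observed service performance acting as the reward signal.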

1.7 Dissertation Disposition
The dissertation is organized into chapters as follows:
Chapter 1, Introduction, briefly introduces the research area of this work. It
further presents the research problems and questions.
Chapter 2, Background and Related Work, provides extended background in-
formation and related work on the topics of the included publications.
Chapter 3, Research Methodology, discusses the research approach and meth-
ods applied in this thesis, including methods for evaluation and research ethics.
Chapter 4, Summary of Papers, summarizes and presents the main findings
and contributions of the included publications.
Chapter 5, Concluding Remarks, discusses the results and limitations of this
work. This chapter also presents the conclusions of the research and sugges-
tions for future work.

2. Background and Related Work

The term IoT was first introduced by Kevin Ashton, the Executive Director
of the Auto-ID Center at the Massachusetts Institute of Technology, during a talk
he gave to Procter & Gamble in 1999 [22]. In his speech, he emphasized
the limitations of human beings in terms of attention, time, and accuracy in
producing all the data available on the web. He suggested that computers
should be used to gather and analyze data without human assistance in order
to fully understand all things. Over time, the IoT has evolved into systems that
utilize technologies such as the internet, wireless communication, microelec-
tromechanical systems, and embedded systems. This chapter explores various
works related to IoT architecture and the integration of intelligence into the
edge layer.

2.1 Cloud-Based IoT Systems


Currently, a significant challenge in the IoT is deriving insights from the raw
data collected. The original purpose of the IoT was to gather data from con-
nected devices and send it to the cloud for further processing, analysis, and
decision-making. Many solutions follow this model, where data are transmit-
ted from the environment to the cloud for storage, management, and decision-
making, as it offers benefits such as large storage capacity, advanced process-
ing capabilities, and remote access. For instance, Hassanalieragh et al. [23] pro-
posed a cloud-based health monitoring and management system that consists
of three main components. The first component is a data acquisition system
that utilizes wearable sensors to gather physiological biomarkers. The sensors
transfer the data to the network through an intermediate data aggregator such
as a smartphone. Second, the data transmission components transfer patients’
records to the health data center in real time. Finally, the cloud processing
components store, analyze, and visualize the data. Over the last decade,
many researchers have discussed and proposed IoT and cloud integration models.
The studies in [24] and [25] demonstrate the need to integrate the IoT with
the cloud by presenting a deep understanding of the integration between cloud
computing and the IoT. They also provide an overview of the current research
related to this topic. Babu et al. [26] presented an architectural design for in-
tegrating the cloud and IoT called Cloud-Assisted and Agent-Oriented IoT.
This design has three main components: a smart interface agent (SIA),
smart user agent (SUA), and smart object agent (SOA). The SIA interacts with
external IT systems. The SUA models users in the context of a specific intelli-
gent system; users can establish a certain request service using a GUI provided
by the SUA. The SOA models the physical environment, and it is supported
by a cloud computing platform. Li et al. [27] employed the Topology and Or-
chestration Specification for Cloud Applications (TOSCA) standard for cloud
service management to describe the elements and configurations of IoT appli-
cations systematically. The TOSCA standard describes in detail the topology
of application components and the implementation process of the applications.
All the previously mentioned models are entirely cloud-based, meaning that
they need to transfer all raw data to the cloud for processing and decision-
making. However, cloud-based systems cause a high level of latency for IoT
applications [28; 29]. Moreover, none of the previously mentioned papers dis-
cuss processing raw data collected from end devices at the edge layer to learn,
make decisions, and take action without sending the data to the cloud. There-
fore, researchers have started to look at edge computing to process the data
close to connected devices, as it provides fast processing, energy efficiency,
reliability, security, and privacy [30].

2.2 Enabling Edge Computing for IoT Systems


A large amount of work is being performed in the field of enabling edge com-
puting to process the raw data being generated and make decisions. Some of
the work focuses on enabling an IoT gateway at the edge of the network to
handle and manage IoT data. For example, Mueller et al. [31] introduced
a SwissQM/SwissGate system to program, deploy, and operate wireless sen-
sor networks. They proposed a gateway called SwissGate and applied it to
smart home applications. Jong-Wang et al. [32] also developed a sensor net-
work system that mainly consists of one main server and several gateways to
connect several sensor networks. Designing such a system requires many con-
figurations and high hardware costs. In another work, Bimschas et al. [33]
presented middleware for a smart gateway to run different applications, such
as protocol translation and request caching with sensor discovery. For gen-
eral IoT applications, Guoqiang et al. [34] introduced a general-purpose smart
IoT gateway that supports several communication protocols to translate dif-
ferent sensor data and external interfaces for flexible software development.
In order to use smartphones as a gateway, Bian et al. [35] utilized an An-
droid phone as a temporary smart home gateway that can predict user behavior
to shut down unused devices. This work aims at providing a dynamic home
gateway that can reduce the wasted energy of a smart home. In the health-
care IoT application domain, Shen et al. [36] presented an intelligent 6LoW-
PAN border router that connects the healthcare sensors with an IP network and
uses a hidden Markov model to make local decisions concerning health states.
Stantchev et al. [37] introduced a three-tier architecture (cloud, gateway, and
smart items) to enable servitization for smart healthcare infrastructure. Servi-
tization is the trend of convergence between manufacturing and the service
sector. Rahmani et al. [38] also proposed an intelligent e-health IoT gateway
for remote health-monitoring systems at the network edge in a three-layer ar-
chitecture. The gateway is able to provide several services, such as real-time
data processing, local storage, and data mining. In another work, Azimi et al.
[39] proposed a hierarchical model for IoT health-monitoring systems based
on the MAPE-K [40] computing model introduced by IBM. The model uses
fog and cloud computing to partition and execute machine learning data ana-
lytics. Many works have tried to overcome the intelligence challenges of edge
computing. For example, Badlani et al. [41] introduced smart home systems
based on ANNs (Artificial Neural Networks). They aim to reduce power con-
sumption by analyzing human behavior patterns. Badlani et al. trained a single
perceptron network to control the fan using random temperature and humid-
ity values. In order to help disabled persons, Hussein et al. [42] presented
a self-adapting intelligent home system that helps disabled people overcome
their impediments based on a neural network. They used a feedforward neural
network to design an intelligent fire alarm system. They also used a recurrent
neural network to learn user habits. However, building such a system requires
a large amount of configuration as it needs a server at the edge to process and
save the data. Furthermore, the number of trained and tested samples was
small. In another work, Mehr et al. [43] studied the human activity detection
performance of three ANN algorithms: batch backpropagation, quick propa-
gation, and the Levenberg-Marquardt algorithm. The results illustrated that
the Levenberg-Marquardt algorithm is the most effective. Park et al. [44] also
presented a residual-recurrent neural network architecture for a smart home
to predict human activities. They evaluated the proposed system using the
Massachusetts Institute of Technology’s dataset. To counter the intelligence
challenges in the IoT gateway, Wang et al. [45] suggested a smart gateway frame-
work for smart homes consisting of the home layer, gateway layer, and cloud
layer. While the gateway performs data collection, awareness, and reporting,
the cloud stores the reported data and adjusts the data collection and awareness
policy. Another work by Calegari et al. [46] suggested using logic program-
ming as a service (LPaaS) to provide reasoning services for IoT applications.
They mentioned that LPaaS can enhance non-symbolic techniques in order
to achieve distributed intelligence. A recent distributed intelligence approach
was presented by Rahman et al. [21]; they proposed a distributed intelligence
(DI) model for an IoT gateway based on a belief network and reinforcement
learning (RL) to learn, predict, and make a decision. This system is initially
based on a small number of predefined rules; then, the system can change the
rules based on past experiences. Another recent DI approach was proposed
by Allahloh et al. [47]; it is an intelligent oil and gas field management and
control system based on the IoT. It uses currently available technologies such
as SCADA and LabVIEW, which are installed on the workstations and mi-
crocontrollers connected to wireless networks. Alsboui et al. [48] proposed
a mobile-agent distributed intelligence tangle-based architecture that is able
to support multiple IoT applications. The tangle is a flow of interconnected
and individual transactions. These transactions are stored across a decentral-
ized network. The architecture consists of IoT devices, a tangle to process
transactions, a proof-of-work (PoW) server, and a mobile agent. IoT devices
are connected together through transmission control protocol/internet protocol
(TCP/IP) protocols for communication. They communicate with the Tangle to
manage processes and store data efficiently. A PoW server is an IoT device
that performs cost computations on behalf of IoT devices. The mobile agent
supports inter-node communications by transferring a set of transactions when
passing through nodes on its path. As mentioned above, a large amount of
research is being performed on the IoT gateway located between the cloud and
end devices. Although a large amount of work is being done to enable the im-
plementation of the gateway at the edge layer, there have only been small im-
provements in providing intelligence to the gateway by extracting knowledge
from the raw data at the IoT gateway to make decisions locally. Moreover, only
a small number of papers discussed automatic offloading from the gateway to
the cloud in overload situations. In other words, little research has focused
on collaboration between the IoT gateway and the
cloud when the gateway is unable to analyze data, make decisions, and act.

2.3 Federated Learning as Distributed Machine Learning in IoT

Distributed machine learning (ML) attempts to address several challenges,
such as data privacy, communication efficiency, and scalability, particularly
due to the increasing proliferation of edge devices and rising concerns about
privacy and data protection. Federated learning (FL) and split learning are
examples of techniques developed in this field.

• Federated Learning: FL is designed to learn from decentralized data
without explicit data sharing. Instead of sharing data, FL shares model
parameters or updates across devices. This inherently respects data pri-
vacy and complies with data localization regulations. However, design-
ing effective FL algorithms can be challenging without access to raw
data due to the need to account for issues like data heterogeneity, com-
munication efficiency, and straggler handling. Moreover, while FL re-
duces the need to share data, it does not eliminate privacy risks entirely.
Sophisticated attacks can potentially infer sensitive information from
shared model updates, necessitating additional measures like secure ag-
gregation and differential privacy.

• Split Learning: This distributed learning approach enables the training
of deep learning models across multiple devices or nodes without shar-
ing raw data. In this method, the neural network is divided into two
parts: the first part resides on the edge devices, while the second part
is hosted on a central server. The edge devices process their data using
the first part of the neural network and send intermediate feature rep-
resentations (not the raw data) to the central server. The central server
then aggregates these feature representations, performs the remaining
computations using the second part of the neural network, and updates
the model accordingly. By sharing only intermediate feature represen-
tations rather than raw data, split learning helps protect data privacy,
reduce communication overhead, and maintain a balance between local
computation and data transmission [49]; a minimal sketch of this data flow follows this list.
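The following toy sketch (Python with NumPy; the layer split, dimensions, and weights are hypothetical assumptions) illustrates the split-learning data flow referenced above: the edge device transmits only the cut-layer activation, never the raw input; during training, gradients with respect to that activation would flow back across the same boundary:

import numpy as np

rng = np.random.default_rng(0)
W_edge = rng.normal(size=(16, 8))    # first part: on the edge device
W_server = rng.normal(size=(8, 1))   # second part: on the central server

def edge_forward(x):
    # Edge side: compute the intermediate (cut-layer) representation.
    return np.tanh(x @ W_edge)       # only this leaves the device

def server_forward(activation):
    # Server side: finish the forward pass on the received features.
    return activation @ W_server

x_raw = rng.normal(size=(4, 16))     # raw data never leaves the device
sent = edge_forward(x_raw)           # intermediate features cross the network
y_hat = server_forward(sent)
print(sent.shape, y_hat.shape)       # (4, 8) is transmitted, not (4, 16)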

Each of these techniques has its strengths and its use cases. For instance, split
learning is particularly useful when data privacy is paramount, or when band-
width constraints and communication costs are significant factors. However, it
is important to note that although both FL and split learning reduce the need
for data sharing, they do not completely eliminate privacy risks [50].
In addition, there are other privacy-preserving techniques in the context
of distributed ML, such as data anonymization and differential privacy. How-
ever, these techniques also have their own challenges. Although it is seemingly
straightforward, anonymization can be defeated by de-anonymization attacks
through data linkage. Differential privacy, which is a more sophisticated tech-
nique that introduces noise into data or computations to preserve privacy, re-
quires a careful balance between data privacy and utility. Moreover, sharing
even anonymized or differentially private data can still be a concern in certain
stringent regulatory environments or when dealing with highly sensitive data.
Therefore, while these techniques can facilitate some level of data sharing,
they might not be ideal or applicable in all scenarios.
Federated and split learning are primarily designed to address such sce-
narios, enabling learning from data while avoiding explicit data sharing. This
requires a balance between maintaining data privacy and enabling effective
model training. It is true that designing FL algorithms can be challenging in
the absence of client data, but this is the trade-off for the higher level of privacy
that FL aims to provide.
However, FL presents a potentially more advantageous option in certain
respects. Unlike split learning, which divides the model across devices and
a central server (sequential training), FL allows each device to independently
train a global model based on its local data (parallel training), hence reducing
the complexity of the learning process [51]. By aggregating only the model
updates from each device, FL helps ensure data privacy while keeping the full
raw data local. Additionally, this approach addresses the issue of communica-
tion efficiency by transmitting only these model updates rather than large in-
termediate feature representations. The ability to train models in a distributed
manner, reducing the system’s reliance on a central server for computation,
also enhances the scalability of the learning process. This makes FL an ideal
choice in scenarios in which data security is critical, communication resources
are limited, and the IoT system consists of a large number of devices with var-
ious computational capacities. Given these substantial advantages in terms of
preserving data privacy, enhancing communication efficiency, and facilitating
scalability, FL has been chosen as the primary technique for this research. This
choice makes it possible to navigate the complexities of distributed learning in
an IoT ecosystem effectively while capitalizing on the inherent capabilities of
individual devices.
Federated learning was first introduced by McMahan et al. [52], and its
vanilla algorithm is called federated stochastic gradient descent (FedSGD),
where clients locally train a model with SGD in each round and then upload
it to the server for aggregation. An improved version of FedSGD is called
FedAvg, where clients can synchronously execute multiple rounds of SGD in
each training round before uploading the model to the server. This leads to
fewer rounds of communication and higher efficiency of federated learning.
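
As a minimal illustration of this procedure, one FedAvg round can be sketched
as follows (Python with NumPy, under simplifying assumptions: linear-model
clients, full-batch SGD, and size-weighted averaging; none of the names below
come from [52, 53]):

import numpy as np

def local_sgd(w, X, y, epochs=5, lr=0.1):
    # Client step: several local epochs of SGD on private data (linear model).
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

def fedavg_round(w_global, clients):
    # Server step: average the returned models, weighted by local dataset size.
    n_total = sum(len(X) for X, _ in clients)
    return np.sum([local_sgd(w_global, X, y) * len(X) / n_total
                   for X, y in clients], axis=0)

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
clients = [(X, X @ w_true + 0.1 * rng.normal(size=50))
           for X in (rng.normal(size=(50, 2)) for _ in range(4))]

w = np.zeros(2)
for _ in range(20):                 # communication rounds
    w = fedavg_round(w, clients)    # w approaches w_true; no raw data is shared

With epochs=1, this collapses to full-batch FedSGD; increasing the number of
local epochs is exactly what reduces the number of communication rounds.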
Despite the benefits of FL, there are still challenges. For example, the fre-
quent uploading of models can lead to a high communication overhead that
decreases efficiency. It was to address this issue that McMahan et al. [53]
proposed FedAvg. In [54], Nishio and Yonetani focused on the problem of selecting
clients with resource constraints to reduce the communication time. They pro-
posed a protocol that assigns clients a deadline for downloading, updating, and
uploading ML models, improving the efficiency of the entire training process.
Data that are not independent and identically distributed (non-IID) in FL can
also affect performance and efficiency, as discovered by Zhao et al. [55]. Wang et
al. [56] studied the federated learning convergence bound from a theoretical
perspective and proposed a control algorithm that identifies the appropriate
tradeoff between local updates and global parameter aggregation.
Sattler et al. [57] proposed an adapted sparse ternary compression (STC)
framework to reduce the communication cost for FL in the presence of non-
IID data. Wang et al. [58] analyzed the connection between the distribution of
training data on a device and the trained model weights based on that data and
proposed an RL system that uses deep Q-learning to select a subset of devices
in each communication round to maximize the reward, thereby promoting a
higher validation accuracy.
Finally, Zhang et al. [59] developed an efficient FL framework leveraging
multi-agent reinforcement learning (FedMARL) to improve the model accu-
racy, processing latency, and communication efficiency. However, FedMARL
may raise the computational load of nodes, making it unsuitable for circum-
stances where computing is limited.
Overall, while FL has shown great potential for enabling intelligent IoT
services at the network edge, further research is still needed to explore strate-
gies that can mitigate or resolve the challenges faced in practice.

2.4 Communication in Distributed Intelligence


Numerous technologies have been developed to facilitate edge computing im-
plementations within the 5G network infrastructure, specifically at the network
edge. Two prominent and widely recognized technologies in this domain are
Open Radio Access Network (O-RAN) [60] and European Telecommunica-
tions Standards Institute multi-access edge computing (ETSI MEC) [61]. The
O-RAN is an advanced, open, and intelligent RAN architecture that aims to
transform traditional, closed, and proprietary RAN systems into a more flexi-
ble, scalable, and cost-effective network infrastructure [62]. The O-RAN Al-
liance, an industry consortium, promotes the adoption of open, interoperable
interfaces and RAN virtualization to enable multi-vendor environments, fos-
ter innovation, and optimize network performance. The O-RAN facilitates the
disaggregation of hardware and software components, allowing operators to
mix and match equipment from different vendors and deploy advanced fea-
tures such as AI/ML-driven optimization, network slicing, and energy effi-
ciency [63].
ETSI MEC is a standardized framework developed by the ETSI that en-
ables the deployment of computing resources at the edge of the network, closer
to end-users and devices [64]. By processing data near the data source, MEC
significantly reduces latency, improves the user experience, and enables the de-
velopment of new services that require real-time processing capabilities [65].
MEC can be utilized across various access technologies, including 5G, LTE,
and fixed networks, and it supports a diverse range of applications, such as
the IoT, augmented reality, virtual reality, and autonomous vehicles [66]. The
O-RAN and ETSI MEC can be complementary in a 5G network deployment
[67]. An O-RAN architecture with open interfaces and virtualized components
can provide the necessary flexibility and scalability for the implementation of
ETSI MEC solutions [67]. By integrating MEC capabilities into an O-RAN-
based network, operators can further optimize the network performance, de-
liver ultra-low latency services, and enable new business opportunities and
use cases. However, in this research, we focus on the O-RAN architecture, as
it promotes interoperability and breaks down the barriers created by proprietary
systems, whereas MEC focuses on bringing computational resources closer to
the network edge. O-RAN architectures are designed to be flexible and pro-
grammable, making it easier to add new features or make adjustments to the
network [68]. This flexibility can foster greater innovation in network design
and functionality. Additionally, the O-RAN supports the development of an
open and competitive ecosystem where multiple vendors can contribute. This
competitive environment can lead to more choices, lower prices, and more in-
novative solutions for network operators.
On the other hand, over the past decade, many researchers have proposed
software-defined networking (SDN) based architectures for the IoT to make
communication and resource allocation smoother at the core and edge of the
network. However, several challenges still need to be addressed in deploy-
ing such architectures in heterogeneous cross-domain network environments.
One major problem in deploying SDN in the IoT is the lack of a standard-
ized communication method for east/west interfaces between distributed SDN
controllers. To overcome this challenge, Almadani et al. [69] proposed a dis-
tributed SDN control plane framework (DSF) that applied a data-centric real-
time publish/subscribe (RTPS) communication model between control plane
entities. In this framework, the control plane entities exchanged link discovery
updates in the form of a link update message protocol (LUMP) message that
enabled the routing of data packets across multiple domains. Li et al. [70]
proposed an SDN-based architecture for the IoT in which the IoT gateway was
SDN-enabled. They used an open programmable interface that enabled IoT
gateways to be upgraded on the fly when a new application was introduced.
They also defined further gateway functions such as node management, pro-
tocol conversion, and security control. However, the proposed gateway did
not have a distributed architecture, and multiple applications were prevented
from using the same gateway. Salman et al. [19] proposed an SDN-based ar-
chitecture for the IoT that aimed to implement SDN in the fog layer to reap
the benefits of both paradigms. The gateways in the proposed architecture
provided key gateway services by ensuring compatibility between different
communication protocols and heterogeneous networks. The proposed archi-
tecture also included a standard northbound interface and a programmability
feature that enabled dynamic gateway updates. However, the gateway did not
have a distributed structure. In [71], Muñoz et al. proposed an architecture for
the optimal distribution of IoT processes to the edge of the network based on
network resources by integrating SDN with the IoT. This architecture enabled
better control over IoT flows and allowed for the use of techniques for the
avoidance of traffic congestion. They also proposed and experimentally evalu-
ated the architecture of an SDN-enabled container-based edge node, which was
developed to enable seamless interaction with IoT-enabled SDN and a cloud
orchestration platform. Hakiri et al. [72] presented a data-centric architecture
based on a symbiotic relationship between Data Distribution Service (DDS)
and SDN, with the aim of enabling agile and flexible network orchestration. A
DDS northbound interface was added to the SDN controller to provide it with
all the necessary functions of IoT network applications and enable network-
agnostic support for IoT systems. However, their work did not include a per-
formance analysis that measured indicators such as communication overheads
or message delays. Qureshi et al. [73] developed a distributed SDN approach
to improve the overall energy efficiency of smart grid systems in which the
controllers are logically centralized but physically distributed. They proposed
a technique that allowed the data plane to improve the response time of the
controllers, and the controller determined whether the data needed a prioritized
flow to be sent locally or globally, using elephant or mice flows. Gonzalez et
al. [73] proposed an OpenFlow-based clustering management system that man-
aged communication between clusters through cluster heads, which were SDN
controllers (SDNCHs). They also proposed a routing protocol for communi-
cation between SDNCHs in the SDN environment and developed a testbed to
evaluate the proposed protocol. However, their work likewise lacked a perfor-
mance analysis measuring indicators such as communication overheads or
message delays. In summary, none of these works considered performing data
processing using the IoT gateway at the edge of the network.

2.5 Summary
The IoT is a network of interconnected devices that can communicate with
each other and exchange data. One of the key challenges in the IoT is man-
aging the large amounts of data generated by these devices. Distributed intel-
ligence is a solution to this problem that involves distributing the intelligence
among the devices themselves rather than relying on a centralized cloud-based
system. In this approach, ML models are trained locally on the devices them-
selves, using data generated by the devices. FL is a popular technique for
distributed ML in IoT, enabling privacy-preserving model training on data
distributed across multiple devices.

Figure 2.1: The area of focus of this research.

Despite the benefits of federated learning, there are still challenges, such
as the communication overhead and non-IID
data. Researchers have proposed various strategies to address these challenges,
such as adapting compression frameworks and RL systems to select a subset
of devices in each communication round to maximize the reward. However,
further research is still needed to explore strategies that can mitigate or resolve
the challenges faced in practice. Communication is another key challenge in
distributed intelligence in IoT. Researchers have proposed various architec-
tures for implementing SDN in the IoT, which enables better control over IoT
flows and allows for the use of techniques for the avoidance of traffic conges-
tion. However, there are still challenges in deploying SDN in IoT, such as the
lack of a standardized communication method for east/west interfaces between
distributed SDN controllers. Researchers have proposed various solutions to
address this challenge, such as a DSF that applies a data-centric RTPS com-
munication model between control plane entities. Distributed intelligence is a
promising solution for managing large amounts of data generated by IoT de-
vices. This interdisciplinary field involves AI, fixed and mobile networks, edge
computing, and the IoT, resulting in new challenges. As shown in Fig. 2.1,
considering the related works that have been discussed, the focus of this
research is to address the challenges mentioned in this chapter by leveraging
the benefits of research in each of these areas.

3. Research Methodology

This chapter outlines the fundamental philosophical and methodological prin-


ciples that form the basis of the research and experimental methods employed
in this study. It begins by discussing the philosophical assumptions that un-
derpin this research; this is followed by a description of the research methods
utilized. Finally, the chapter concludes by addressing ethical concerns relevant
to this study.

3.1 Philosophical Assumptions

Scientific investigation involves a set of philosophical beliefs that a research


community utilizes. In the realm of computer science, these philosophical
beliefs are typically divided into either positivist or interpretivist viewpoints
[74]. Positivism is founded on the idea that a singular reality exists within
human consciousness, and this reality is universally applicable to all observa-
tions. Knowledge about this reality is objectively attained through observation
and experimentation, with axiology being used for hypothesis testing and value
prediction, and methodology determining the approach used to gain knowledge
through experimentation and analysis. This research adheres to positivism and
seeks to construct a model that implements distributed intelligence in the IoT
field. Research on the IoT centers around verifying hypotheses based on es-
tablished theories that have been tested empirically. In this study, hypotheses
about developing a distributed intelligence system intended to assist in the au-
tonomic management of the IoT are tested through experimentation. There-
fore, in this study, design science research (DSR) will be used as the research
paradigm to construct a model that implements distributed intelligence in the
IoT field. The model will be tested through experimentation, with a focus
on verifying hypotheses based on established theories that have been tested
empirically. Specifically, the hypotheses will revolve around developing a dis-
tributed intelligence system intended to assist in the autonomic management
of the IoT.

3.2 Research Methods
Brocke et al. define DSR as a research framework in which a designer ad-
dresses issues relevant to human problems by developing innovative artifacts
that contribute new knowledge to the scientific community [75]. The objective
of DSR is to enhance the environment by creating a new artifact. The en-
vironment, which encompasses people, organizational systems, and technical
systems, provides insight into potential issues and opportunities. This insight
can be articulated in terms of requirements that justify the necessity for a new
artifact. The foundation for developing a new artifact can be established by
drawing on the existing body of knowledge, which may include scientific the-
ories and methods, experience and expertise, and meta-artifacts. A new artifact
can be created based on the requirements and the existing body of knowledge.
Design science is focused on creating functional solutions that generate and
convey new knowledge. It follows a process with several steps and iterations,
as shown in Fig. 3.1, ranging from the explication of the problem to the
evaluation of the designed and developed artifact against the specified
requirements. While the order of the design science steps resembles the
sequential waterfall model, each step relies on the outputs of previous steps,
and the process adopts an iterative and agile approach to developing the
solution, dividing the work into phases and disciplines, with almost all
disciplines involved in each phase. This thesis follows the DSR approach. The
aim is to develop a model, that is, an artifact, that can be evaluated
quantitatively against existing solutions. By applying deductive reasoning to
each developed sub-artifact, the model's usefulness and novelty can be
demonstrated and evaluated, making it possible to accept or reject the model
by comparing it with existing solutions or standards when applicable.

3.3 Explication of the Problem


The initial phase of DSR involves defining the research problem by asking
questions, similar to traditional scientific research. This step has been done
through observation and exploration. Observation and exploration are valu-
able techniques for identifying and exploring problems in the context of DSR.
This process is outlined in the Introduction and Background sections of the
accompanying publications. To create the artifact, a system is built based on
existing standards, tests, and results. One such system is showcased in paper
I, which presents an architecture for managing distributed intelligence in IoT
systems. However, due to the overwhelming amount of data generated by IoT
devices, centralized approaches have proven to be insufficient. As a result, a
distributed approach has been proposed to address this challenge.

Figure 3.1: Applying the design science research method (adapted from [74]).

Nevertheless, with the anticipated increase in the number of IoT devices, scalability
remains a significant issue that needs to be resolved. Cloud computing has
been the preferred choice for managing computing in the IoT. However, the
IoT demands real-time computing with minimal latency, which cloud com-
puting struggles to deliver. The IoT encompasses several applications, such
as smart homes, smart health, smart farming/agriculture, smart parking, and
smart retail. These applications require quick decisions based on raw data
collected from sensors. To avoid delays in decision-making caused by cloud
computing, it is necessary to make rapid decisions as close to the devices as
possible, i.e., at the edge. Hence, solutions that provide real-time computing
with low latency are necessary. This challenge is further addressed in papers
I–VI. Once a model is created to address these challenges, it must be tested.

3.4 Definition of Requirements


As stated in [74], the second step of DSR is to outline an artifact to address the
problem formulated and analyzed in the “Explication of the Problem” step and
transform the problem into requirements for the proposed artifact. The output
of the previous step concerns the need for an architecture for implementing
distributed intelligence in IoT systems, which serves as the input for the “Def-
inition of Requirements” step. The requirements for the artifact outlined in the
accompanying publications focus on the development of approaches, methods,
algorithms, and relationships that can address the identified problem in the IoT
environment. The requirements for the artifacts have been analyzed through
literature reviews in papers I to VI. The first requirement is the development
of an architecture that can facilitate the implementation of distributed intelli-
gence, which can be realized through a two-tier solution (papers I and II). This
architecture should enable computing to be done as close to the connected
devices as possible while also being capable of distributing and delegating
computational tasks from cloud computing to the edge. The next challenge is
finding a suitable reasoner for processing data at the network edge (papers I, II,
and III). New algorithms are needed to enable such a model, which should be
based on a distributed communication method to enable a distributed approach
at scale (papers IV, V, and VI). To integrate two-level intelligence with the IoT,
new algorithms are required to enable self-configuration, self-optimization,
etc. A set of controllers based on cloud and/or edge computing can execute
such algorithms to enable communication for the distributed system (papers
IV, V, and VI).

3.5 Designing and Developing the Artifact


The next stage of DSR involves designing and creating an artifact to tackle
the research problem, building on the activities performed in the preceding
stages. The present study comprises a compilation of research papers, each
of which outlines the creation of artifacts tailored to the issues identified in
the problem statement section and the additional requirements listed in the
previous section. The final artifact is an amalgamation of the individual sub-
artifacts. The agile development methodology was employed to develop each
sub-artifact through several iterations, enabling improvements in functional-
ity with each iteration while ensuring alignment with the problem statement
and defined requirements. The overall artifact was also tracked throughout
the development process. This thesis introduces a system model design that
facilitates distributed intelligence in IoT systems and is in line with the spec-
ifications outlined in papers I and II and the preceding sections. The design
adopts a two-tier hierarchical structure, with edge computing being utilized in
the lower tier and cloud computing being utilized in the upper tier. Papers I and
II also propose an approach for providing low-level intelligence for IoT appli-
cations through an IoT edge controller that leverages the fuzzy logic controller,
along with edge computing. The proposed architecture includes fuzzy logic
reasoning as the data processing engine, as it is very valuable for dealing with
uncertainty. To enable ML at the edge of the network, in paper III, federated
learning (FL) was employed as a distributed engine for data processing. This
decentralized ML methodology increases the amount and variety of data used
to train deep learning models. Next, a distributed SDN-based IoT architecture
is designed to enable IoT gateways to perform IoT processing dynamically at
the edge of the network based on the current state of network resources (paper
IV). FL applications were then deployed in the distributed SDN-based
architecture, using the gateways to provide distributed intelligence at the
edge of the network (paper V). In the same paper, FL was deployed in a 5G
environment to provide distributed intelligence using the O-RAN architecture.
from the hardware. This feature of the O-RAN can facilitate the optimization
of distributed intelligence in mobile networks. Lastly, in paper VI, the exact
design of the SDN controller in paper V was used to optimize network com-
munication in the O-RAN architecture by slicing the resources for different
applications.

3.6 Demonstration of the Artifact


In today’s fast-paced and ever-evolving world, it is crucial for researchers to
develop and test new methods and technologies that can contribute to solving
real-world problems. The data deluge and latency demands of large-scale IoT
systems, as discussed earlier, are just one example of the many challenges
that society faces today. Therefore, the demonstration of the reliability and
feasibility of a new
research artifact is essential to determining its potential value to the wider re-
search community and society. In this particular demonstration, an in-depth
performance analysis of the designed artifact using a real testbed has been
performed. The artifact has been developed through multiple stages and docu-
mented in various papers, providing evidence of its novelty and usefulness.
The testbed has been tested and demonstrated for several different scenar-
ios, with the reliability of the testbed being a key focus. The delay, jitter, and
packet loss ratio performance of the developed model have been tested to en-
sure that it does not introduce unnecessary delay or jitter. This is essential to
ensuring that the model can be used in practical applications without compro-
mising its effectiveness.
Furthermore, the artifact’s performance has been demonstrated for several
scenarios using various numbers of nodes per cluster and various rates of data
flow. This has provided evidence of the artifact’s versatility and adaptability to
different real-world scenarios.
To validate the correctness and feasibility of the developed artifact, var-
ious simulation scenarios have been compared. This comparison has helped
to generalize the results and reflect probable real-world scenarios. This has
demonstrated the potential practicality and usefulness of the developed arti-
fact.
In this study, a testbed was utilized to conduct experiments and validate
the approach. While the aim was to investigate the scalability of the solution,
resource constraints limited the ability to explore this aspect fully. To address
this limitation, there is a need for further research and experimentation under
varied conditions and with larger-scale datasets. It is important to note that the
findings presented in this study provide a strong foundation for future work
related to assessing the scalability of the approach, as well as its applicability
to diverse, large-scale scenarios.
Overall, this demonstration serves as an excellent example of the impor-
tance of testing and validating new research methods and technologies. It em-
phasizes the need for researchers to develop artifacts that can contribute to
solving real-world problems and provide practical solutions. By showcasing
the reliability and feasibility of the developed artifact, evidence of its potential
value to the wider research community and society has been provided. This
demonstration is a significant step towards creating a better and more con-
nected world through innovative research and development.

3.7 Evaluation of the Artifact


The purpose of the demonstration is to show that the study is both innovative
and useful. A performance analysis was conducted using a real testbed devel-
oped through several steps to demonstrate the designed artifact, as described
in various papers. The reliability of the testbed was tested for several differ-
ent scenarios; the delay, latency, and accuracy were measured. The developed
model should not introduce unnecessary delays or latency. Its performance is
demonstrated for several scenarios using various numbers of nodes per cluster
and various data flow rates. To verify the correctness and feasibility of the
developed artifacts, they were compared using various simulation scenarios.
This comparison helps to determine whether the developed artifact contributes
to the research community and can solve real-world problems.
The experimental setup used in this study is based on previous research de-
scribed in the previous chapter and in various papers (papers I–VI) that discuss
the respective artifacts. During the demonstration stage, the model is evaluated
against quantitative data generated through experiments. Simulations are used
to generate new data for empirical systems, allowing for faster and more cost-
effective scientific inquiry into computer-based models. For instance, if the
temperature sensor values of a healthcare system need to be observed, it may
take a long time to gather data for a specific season. However, simulating
the expected values of the healthcare system can provide new data about the
developed model for probable scenarios.
It is important to note that the evaluation of the artifact is not dependent
on a single platform, and different approaches are evaluated through different
publications. Each publication compares a particular approach’s evaluation to

28
Design Science Steps
RQ
Outline artifact and Design and develop
Explicate problem Demonstrate artifact Evaluate artifact
define requirements artifact
Requirement analysis Experiment
RQ1 Observation and exploration Generate Response time
through literature review (simulation)
Requirement analysis Experiment RMSE
RQ2 Observation and exploration Search and select
through literature review (simulation) MAPE
Delay
Requirement analysis Experiment
RQ3 Observation and exploration Generate Response time
through literature review (testbed)
Throughput
Delay
Requirement analysis Experiment
RQ4 Observation and exploration Generate Response time
through literature review (testbed)
Throughput

Table 3.1: Methods used in each step of the design science methodology.

past and current approaches, when applicable. These comparisons and results
provide evidence of any improvements or standard behavior, allowing for con-
clusions about the developed model to be drawn. To evaluate the developed ar-
tifact in terms of the communication performance, the following metrics have
been used: the delay, response time, throughput, processing time, and network
convergence. These five parameters are considered to be significant for validat-
ing the achievements of the artifact in the IoT and network. For the evaluation
of the artifact related to the reasoner, the following metrics have been used: the
accuracy, root mean square error (RMSE), and mean absolute percentage error
(MAPE).
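For completeness, the reasoner metrics are used in their standard definitions,
with $y_i$ the observed value, $\hat{y}_i$ the predicted value, and $n$ the
number of samples:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad
\mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|.$$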
Table 3.1 summarizes the methods that have been used in each step of the
design science methodology.

3.8 Research Ethics


This body of research, grounded in the domain of the IoT, primarily focuses
on the introduction and enhancement of distributed intelligence within IoT sys-
tems, with the goal of maximizing efficiency and minimizing the delay in these
systems. This extensive work has been segmented into six research papers,
with each targeting a unique aspect of distributed intelligence for IoT systems.
None of the research segments directly involve human subjects, nor do they
incorporate personal data from external sources. The majority of the data uti-
lized in this research were generated through computational simulations and
experiments conducted on personal computing devices, such as PCs or Rasp-
berry Pi units. For a small fraction of the research, publicly accessible bench-
mark datasets that are available online were used. However, ethical issues arise
when IoT use cases are deployed in human society [76]. Given the potential
real-world applications of this research, such as smart home applications, pri-
vacy concerns are a significant aspect to address. Although this research does
not directly deal with personal data, it is known that IoT systems could lead
to the collection and processing of personal data. Hence, this research aims
to design methods that inherently respect user privacy, promote transparency,
and offer control over data usage. The distributed intelligence that is proposed
is meant to keep most of the data at the edge (i.e., close to where the data are
generated), minimizing the need for data transmission and thereby enhancing
user privacy. What is shared with the cloud primarily consists of model pa-
rameters and public data, which drastically reduces the potential exposure of
sensitive information. Furthermore, the studies on communication overlays
also consider user privacy, particularly when dealing with potentially sensitive
data. On the other hand, there is still a lack of proper ethics governing how
these data can be collected without violating people’s privacy. Thus, future
IoT devices and services will require an ethical design that provides users with
different ethical options within the digital platform [77]. In other words, IoT
users will have complete freedom to make their own ethical choices while in-
teracting with IoT devices. All ethical options and choices will be embedded in
the algorithms that programmers and developers create. These choices will in-
clude different degrees of privacy and data protection so that users can choose
the options that are best for their purposes [78]. According to Pollard [79], IoT
devices with an ethical design should have the following characteristics:

1. The ability to manage and control the collection and distribution of per-
sonal data or services.

2. The ability to apply different rules and policies regardless of time and
space.

3. The ability to support dynamic contexts such as the home and office.

4. The ability to observe, recognize, and support relationships that require
ethical options.

While the current study ambitiously strives to incorporate these attributes,
the practical realization of the first and fourth characteristics is beyond the
scope of the present study. The general strategy, emphasized throughout this
research, is to refrain from transferring data outside of the client device to
uphold the sanctity of privacy.

3.9 Summary
In this chapter, the methodological approach used in this study was presented,
along with the philosophical assumptions that underpin it. The chapter began
by discussing how these assumptions can guide the acquisition of the knowl-
edge needed to answer the research question. The research method used to
obtain this knowledge was then presented, along with a justification for why
this particular method was chosen. The chapter also discussed how comparing
this method with other methods can validate the research question.
Next, the chapter outlined the steps involved in the research process. It
provided an overview of each of these steps and explained how they contribute
to the overall goal of the study. Additionally, the chapter briefly touched on
the ethical considerations involved in this research.

4. Summary of Papers

This chapter gives an overview of the papers included in this thesis [80–85].
Section 4.1 describes the two-level architecture developed to implement dis-
tributed intelligence in IoT systems, along with its contributions and main
findings. Section 4.2 presents an empirical approach that examines the use
of enabling technologies such as SDN and the O-RAN to address the commu-
nication challenges of distributed intelligence in IoT systems. Lastly, Section
4.3 presents the results of empirical investigations that assess the deployment
of the proposed two-level architecture in the 5G environment.

4.1 Two-Level Intelligence


In this section, a two-level architecture is designed that uses edge computing
to enable distributed intelligence, providing the intelligence of things by
harvesting the information of things closer to the devices. This is in response to the first
research question of this thesis; the aim is to provide a novel and generalizable
distributed intelligence architecture that is capable of exploiting heterogeneous
and constrained IoT devices. This section also considers the study’s second
question of finding a suitable reasoner for two-level distributed intelligence
and an efficient way of applying it in the architecture through an IoT gateway.

4.1.1 Motivation
As mentioned in the previous chapters, the number of IoT-connected devices
is expected to be in the billions in the near future. However, this also presents
new challenges, such as the need for comprehensive middleware solutions ca-
pable of addressing all aspects of the IoT and complex controllers to manage
the huge number of components and ensure stability and robustness. This is
where the concept of edge computing comes into play, allowing the use of
the distributed computational capacity closer to the devices, which can help
reduce latency, optimize network bandwidth, and offload the burden from the
cloud to the edge. To demonstrate the feasibility of this approach, a novel
two-level intelligence architecture is proposed that provides low-level intel-
ligence closer to the devices while leveraging fuzzy logic control to provide
low-level intelligence (i.e., edge intelligence) based on small amounts of data
at the edge before providing high-level intelligence in the cloud.

Figure 4.1: The low-level intelligent controller scheme [80].

Furthermore,
the proposed implementation of an IoT gateway with a fuzzy logic controller
as a reasoner module learns and predicts the desired temperature for a cot-
tage based on the location of the owner and data from sensors in the cottage.
The results show that the proposed implementation has a higher accuracy in
comparison to a simple rule-based reasoner. This demonstrates the potential
of using edge computing and fuzzy logic control to address the challenges of
the IoT and enable distributed intelligence, which can be applied in various
industries beyond smart homes.

4.1.2 Study Design


Architecture

In papers I and II, the focus was on what architecture could enable distributed
intelligence in IoT systems. Therefore, we proposed a low-level intelligence
architecture according to the challenges and needs mentioned in previous chap-
ters and in papers I and II.

Figure 4.2: Distributed intelligent gateway controller in the distributed IoT [80].

As shown in Fig. 4.2, the cloud would provide high-level intelligence, while
the low-level intelligence consists of three planes: the intelligence plane,
the SDN control and context awareness plane, and the for-
warding plane. The intelligence plane provides a mechanism of learning that
allows the controller’s IoT gateway to manage input uncertainties. The SDN
control and context-awareness plane is responsible for monitoring the com-
munication between the applications and services in the cloud and the user
devices, enabling the IoT gateways to be dynamically managed via real-time
requests and statuses; this plane should provide the ability to respond to any
context with context-aware reasoning. The forwarding plane is responsible
for forwarding data to user devices or wireless sensor networks (WSNs). To
implement low-level intelligence at the edge of the network, we proposed a de-
sign for an IoT gateway consisting of two main modules: application services
and the connected software element, as shown in Fig. 4.1. The application ser-
vices handle data storage, management, and connectivity to sensors and appli-
cations, while the connected software element provides the software elements
required for the gateway. Paper II focuses mainly on the reasoner module in
great detail. This paper proposes an updated design for the IoT controller, as
shown in Fig. 4.3.

Figure 4.3: The low-level intelligent control scheme [81].

We presented two key modules in our proposed IoT gateway design: appli-
cation services and the connected software element. The application services
module is responsible for data storage and management, as well as connectiv-
ity to sensors and applications, and comprises device management, protocol
adapter, and Cassandra database components. The device manager and proto-
col adapter function as the SDN control plane and forwarding plane by con-
trolling the data flow between the reasoner and the relevant application. The
connected software element provides the necessary software components for
the gateway.

Reasoner

A large number of small datasets are handled by the IoT gateway, where
these small datasets must be immediately transformed into actionable information
for making technical decisions. Decision-making is important for providing
context-based services in order to gain distributed intelligence in the IoT. Rea-
soning, the most important part of decision-making, is about drawing conclu-
sions and deducing new facts that do not exist in the knowledge base.

Figure 4.4: Fuzzy inference system.

The primary role of a reasoner is to reason based on the data fed into it. To de-
sign a reasoning service for context-aware applications, which must respond to
context changes in the network, in the first paper we designed a fuzzy logic
controller (FLC). Since the FLC, like human reasoning, is flexible and based
on decision-making methods, controlling its operation is needed to reach
better decisions; this need for operation control led to the FLC mechanism
shown in Fig. 4.4. To make the reasoning engine a good predictor under data
uncertainty, which allows the reasoning selection to reduce the inference time
of the sharing process, a generic enabler method is employed that maps an
input to an output using interval type-2 fuzzy logic.
In the second paper, an intelligent approach using fuzzy logic is proposed
to efficiently adjust the use of electricity from the main grid, renewable re-
sources, and a battery to reduce the use of electricity from the main grid and
maximize the level of the battery at times when the price of electricity is high-
est. The structure of the FLC shown in Fig. 4.5 is simple, as it provides both
a low computational complexity and straightforward implementation on real
devices. The FLC requires a lower computational complexity compared to
other methods. The ease of implementation of the FLC mainly depends on the
number of rules that are defined. The definition of the rule process becomes
difficult when a large number of rules are required. Several methods have been
developed to reduce the burden of the process of manual rule definition. In this
work, a combination and weighted method has been used to automate the rule-
generation process. In this method, the output is determined by a score in
which each input parameter is combined with two weights.
The main steps of the FLC are as follows.

Figure 4.5: The proposed FLC architecture.

In the first step, the input parameters are assigned to the membership
functions. In the second step, to prevent
the cumulative growth of if-then rules, a parameter called the score is used to
determine the required energy level. A low score means low energy consump-
tion, and vice versa. The score is calculated as the weighted sum of the weights
assigned to the membership function of the variables involved. The first weight
depends on the membership function, where low, medium, and high member-
ship functions have weights of 0, 1, and 2, respectively. The second weight
depends on the type of value, where the energy consumption, battery level,
and price rate have weights of 1, 2, and 3, respectively. In the automatic rule
generation method (Algorithm 1), the order of the input parameters does not
matter, since the score calculation is commutative. The score at any point in
time can be calculated as follows:
Score = ∑_{i=1}^{3} w1(v_i) · w2(v_i).    (4.1)
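For example, assuming a high energy consumption (w1 = 2, w2 = 1), a low
battery level (w1 = 0, w2 = 2), and a medium price rate (w1 = 1, w2 = 3),
Equation (4.1) gives Score = 2 · 1 + 0 · 2 + 1 · 3 = 5, which falls in the low
band of Algorithm 1 (Score ≤ 7), so the required energy level is set to L.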
In the third step, we used Mamdani as the inference engine. The last step
is defuzzification, where the output of the inference engine, which is a fuzzy
value, is converted into a crisp number.
In the third paper, we proposed a federated learning (FL) system archi-
tecture and overall procedure to allow multiple gateways to control their own
IoT devices. The proposed system consists of one cluster’s head node and N
nodes. Each node has its own independent environment (i.e., IoT device) and
trains its actor and critic models to control the environment optimally through
its algorithm. On the other hand, the cluster’s head node mediates the feder-
ated work across the N nodes and ensures that each node's learning process is
synchronized.
Algorithm 1 Automatic Rule Generator
EnergyConsumption ← {L, M, H}
Prates ← {L, M, H}
levelOfBattery ← {L, M, H}
for EnergyConsumption[1] to EnergyConsumption[n] do
    for levelOfBattery[1] to levelOfBattery[n] do
        for Prates[1] to Prates[n] do
            Compute Score using Equation (4.1)
            if Score <= 7 then
                RE = L
            else if Score > 7 and Score <= 15 then
                RE = M
            else if Score >= 16 then
                RE = H
            end if
        end for
    end for
end for
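
For clarity, a direct Python transcription of Algorithm 1 might look as
follows (a sketch that simply follows the weights and score bands stated
above; the dictionary names are illustrative):

# w1 depends on the membership level; w2 depends on the variable type.
W1 = {"L": 0, "M": 1, "H": 2}
W2 = {"energy": 1, "battery": 2, "price": 3}
LEVELS = ["L", "M", "H"]

rules = {}
for energy in LEVELS:
    for battery in LEVELS:
        for price in LEVELS:
            # Equation (4.1): weighted sum over the three input variables.
            score = (W1[energy] * W2["energy"]
                     + W1[battery] * W2["battery"]
                     + W1[price] * W2["price"])
            if score <= 7:
                rules[(energy, battery, price)] = "L"
            elif score <= 15:
                rules[(energy, battery, price)] = "M"
            else:
                rules[(energy, battery, price)] = "H"

print(rules[("H", "L", "M")])   # -> 'L' (score = 2*1 + 0*2 + 1*3 = 5)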

We describe the details of the algorithms used in the cluster head node
(master node) and worker nodes. The deep learning global model
is initialized randomly or pre-trained with publicly available data. This ini-
tialized model is the same for all worker nodes since the conditions must be
the same for all worker nodes, and only the data used for training are different
in each node. The server sends a global model to each worker node in each
round. Then, worker nodes perform a training epoch, compute new weights,
and send them to the master node—each node then uses SGD to compute the
average gradient. In this phase, the master node takes all the weights from
the worker nodes. According to the extreme Studentized deviate (ESD) test,
the outliers’ weights are determined, and they are not considered in the cal-
culations of this round. The master node then uses the FedAvg algorithm to
calculate the averages of the remaining weights and sends them to the worker
nodes for the next round. This process is repeated until the model converges.
The centralized model does not match all node data; therefore, personalization
is a proposed solution to this problem. Personalization is at the heart of many
applications involving understanding and matching node behavior. It consists
of retraining the centralized model to create a customized model for each node
using node-specific data. This can be done by retraining the model locally for
a limited number of epochs using only the node data.
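
A condensed sketch of this aggregation step is given below (Python with
NumPy/SciPy; the ESD screen here is a simplified sequential variant applied to
the norm of each client's weight vector, which is an illustrative assumption,
as the statistic and ESD variant used in paper III may differ):

import numpy as np
from scipy import stats

def esd_outliers(values, max_outliers=2, alpha=0.05):
    # Simplified sequential ESD screen: repeatedly test the most extreme point.
    vals = np.asarray(values, dtype=float)
    idx, out = list(range(len(vals))), []
    for _ in range(max_outliers):
        sub = vals[idx]
        n = len(sub)
        if n < 3 or sub.std(ddof=1) == 0:
            break
        dev = np.abs(sub - sub.mean()) / sub.std(ddof=1)
        j = int(np.argmax(dev))
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        lam = (n - 1) * t / np.sqrt(n * (n - 2 + t ** 2))  # critical value
        if dev[j] > lam:
            out.append(idx[j])
        idx.pop(j)                      # drop the extreme point and repeat
    return out

def robust_fedavg(client_weights):
    # Flag outlier clients by update norm, then average the remaining weights.
    norms = [np.linalg.norm(w) for w in client_weights]
    bad = set(esd_outliers(norms))
    kept = [w for i, w in enumerate(client_weights) if i not in bad]
    return np.mean(kept, axis=0)        # the new global model for this round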

Figure 4.6: Edge and cloud reasoning response time.

4.1.3 Main Findings

For paper I, the main results come from a performance comparison between
the proposed and centralized approaches. To evaluate the proposed approach’s
performance, we measured and compared the latency of reasoning for the
requested activations at both the cloud and the edge (gateway) as a function
of the queries per second (QPS). In the context of networking and computing, latency refers
to the total time it takes for a data packet to travel from one designated point
to another. This encompasses both the network latency, which is the delay in-
troduced as the packet travels across the network, and the processing latency,
which is the delay incurred by various processing tasks such as routing, buffer-
ing, and transcoding within network nodes. Thus, when we refer to latency,
we are essentially referring to the aggregate of these network and processing
delays. As shown in Fig. 4.6, the reasoning at the edge has shorter response
times and a lower latency than cloud computing. When the number of QPS
is increased from 10 to 40, the response time linearly increases for both the
edge and cloud to 200 ms and 900 ms, respectively. The response time starts
to increase exponentially after 24 QPS; the cloud has a response time of more
than 4000 ms at 60 QPS. Considering that 1 second is an acceptable latency,
we can see that the gateway can support at most 56 QPS, and the cloud can
handle 41 QPS. Fig. 4.7 shows the same trend as Fig. 4.6. When the number
of edge devices is increased to 60, the latency quickly increases to about 1200
ms. Consequently, 50–56 edge devices are the maximum loads for a gateway.
These preliminary results show that providing low latency and a low level of
knowledge to reduce dependency on the cloud is possible at the IoT’s edge.
In paper II, we assumed that there are four bandwidths, (5, 10, 15, 20) × 10^4
bits per second, for uploading service demand responses.

Figure 4.7: Scalability in terms of the response time.

The transmission distances of the bandwidths are different. The transmission
distance of service de-

mand responses that the grid architecture with a cloud center requires is much
longer than that of the architecture in which edge computing is deployed. The
numerical simulation gives the hierarchical response results of a large number
of accessed devices. The results of the simulation are depicted in Fig. 4.8 and
Fig. 4.9, which primarily illustrate three points:

1. In the traditional cloud-based architecture, the required transmission
bandwidth rises as the number of devices increases, compared with the IoT
gateway.

2. The edge computing architecture's transmission bandwidth requirement
is always smaller than that of the cloud architecture.

3. The latency and bandwidth are two outstanding quality-of-service (QoS)
indicators. It can be seen from the results of the numerical simulation of
the bandwidth and latency that the QoS performance of the edge com-
puting architecture is much better than that of the traditional cloud-based
architecture.

4.2 Two-Level Communication Mechanism


In this section, two mechanisms are designed to mitigate communication chal-
lenges in edge computing to enable distributed intelligence. This is in response
to this thesis’s third and fourth research questions, which are related to provid-
ing a better communication overlay for the proposed distributed intelligence
architecture.

Figure 4.8: Latency.

Figure 4.9: Required bandwidth.

4.2.1 Motivation

Network management is one of the most critical challenges in realizing tar-


geted pervasive scenarios. Most network management tools focus on devices
rather than the service or application, so there is no real way to correlate what
the user is experiencing with the conditions in the network. SDN is a network
management tool that can hide the complexity of this heterogeneous environ-
ment from the end user and particularly from developers. SDN is an outstand-
ing approach that enables the programming of network devices by separating
the control plane from the data plane. It simplifies the management of the
network by letting the user define the flows of the network without requiring
upgrades to the devices’ firmware. Decoupling the control plane from the data
plane increases control over the network and enables the more efficient use of
the available resources. The advantage of this is that SDN can mitigate the
complexity of edge computing. Since most existing systems extend traditional
cloud data centers to edge servers, the SDN control mechanism can dynami-
cally manage the traffic originating at the edge and provide high-quality ser-
vices to users. Edge devices are simple and constrained, meaning that they
cannot perform complex activities such as orchestration and service discov-
ery. SDN can alleviate this problem with edge devices by collecting network
information via a software-based controller. In other words, SDN manages a
multilayer edge computing infrastructure that meets the QoS requirements of
IoT applications, such as performance and delay requirements, and can im-
prove user satisfaction.

4.2.2 Study Design


In paper IV, the architecture abstracts the system model, as shown in Fig. 4.10. In
this proposed model, distributed SDN is integrated into the two-tier architec-
ture proposed in the first paper. The architecture consists of high-level intelli-
gence and lower-level intelligence. The lower level of intelligence operates at
the edge of the network, and the high level operates in the cloud. As shown in
Fig. 4.10, in order to implement two-level intelligence, four planes are used:
the application, control, data, and M2M device layers. The last three layers op-
erate at the edge of the network. The application layer contains typical network
applications or functions, such as intrusion detection systems, load balancing
applications, or firewalls. A traditional network uses a specialized appliance
like a firewall or load balancer. In contrast, an SDN replaces this type of appli-
ance with an application that uses the controller to manage the data plane be-
havior. Additionally, the application layer contains business applications. The
control plane is part of a network that controls how data packets are forwarded,
which refers to how data are sent from one place to another. In our proposed
model, this plane is distributed, and each controller has a partial view of the
network. After they have been synchronized, all of the controllers manage
the data plane together. Network devices such as switches and routers usually
implement the data plane. The aim of this plane is to forward data based on
control plane instructions. In the proposed model, several new functions have
been added to the functions of this plane. In addition to the typical network
components, the IoT gateway has also been used to implement this plane. In
the proposed model, the data plane has five modules: the forwarding engine,
processing engine, node management, data storage and caching, and protocol
converter modules. The forwarding engine is responsible for forwarding data
based on control plane instructions that are implemented by network switches.
The processing engine implements distributed intelligence, which in this paper
is in the form of FL. This module is located inside the IoT gateway.

Figure 4.10: The architecture of the data and control planes.

In Fig.
4.11, an illustration of the proposed model implemented within the IoT archi-
tecture is shown.
In paper V, we introduced an O-RAN-based 5G architecture to implement the
three-layer client-edge-cloud architecture. We proposed an architecture con-
sisting of two parts, a high-level part and a low-level part. The low-level part
of the architecture focuses more on the network overlay.

Figure 4.11: SDN-IoT integration.

We propose implementing a distributed intelligence network overlay using the
5G and O-RAN
components. The high-level part of the architecture focuses more on the soft-
ware components needed to create the distributed intelligence system scattered
across the network overlay (discussed in more detail in Section 4.3). For the
low-level architecture, we employed the O-RAN as the underlying architec-
ture; it includes multiple network functions and multiple slices for downlink
transmission. Moreover, we take advantage of the two RAN intelligent con-
trollers (RICs): the near-real-time RIC (near-RT RIC) and non-real-time RIC
(non-RT RIC). The near-RT RIC is suitable for operations that can tolerate less
than 1 second of latency, such as hosting third-party applications (xApps) that
communicate with the centralized unit (CU) via standard open interfaces; intel-
ligence can be implemented in the RAN through data-driven control loops. The
non-RT RIC is suitable for operations that can tolerate a latency of more than
1 second, such as training AI and ML models. The O-RAN system includes
other components in addition to the RICs, including the radio unit (RU), the
distributed unit (DU), and the centralized unit (CU). The CU itself consists of
a user plan (UP) and a control plan (CP). In 5G, three primary use-case classes
are introduced in terms of different levels of the QoS, namely ultra-reliable and
low-latency communications (URLLC), extreme mobile broadband (eMBB),
and massive machine-based communication (mMTC) [23]. As shown in Fig.
4.12, there are three dedicated slices for each of the above classes, and each
slice may contain several pieces of user equipment (UE) with similar QoS re-
quirements. Through the E2 interface, the data related to the slice operation
are gathered and stored in distributed databases. The O1 interface is used to
transfer the data to near-RT RICs, and the A1 interface can be used to trans-
fer the data to the non-RT RICs. FL improves the global model iteratively, but
iterative training involves dealing with the challenges of a long learning time
and high latency. Thus, the objective of our proposed reinforcement learning
(RL) method is to reduce the learning time while maintaining the high accuracy
of the global model.

Figure 4.12: O-RAN setup for the underlying architecture.
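To make the division of labor between the two controllers concrete, the following minimal sketch (our own illustration; the Operation type and the one-second threshold encoding are assumptions, not an O-RAN API) routes a control operation to the near-RT or non-RT RIC according to its latency tolerance:

from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    latency_tolerance_s: float  # control-loop latency the operation can tolerate

def route_to_ric(op: Operation) -> str:
    # Operations tolerating less than 1 s of latency (e.g., xApps) run on the
    # near-RT RIC; slower operations (e.g., AI/ML training) run on the non-RT RIC.
    return "near-RT RIC" if op.latency_tolerance_s < 1.0 else "non-RT RIC"

for op in (Operation("xApp control loop", 0.1), Operation("ML model training", 60.0)):
    print(op.name, "->", route_to_ric(op))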
In paper VI, we leveraged the benefits of O-RAN principles (paper V)
and SDN (paper IV) and proposed a two-level resource allocation mechanism
to address the aforementioned challenges of the IoT in terms of latency, as
shown in Fig. 4.13. The SDN controller designates a particular amount of re-
source blocks (RBs) from a shared pool to each gNodeB based on its individual
needs. Following this allocation, the gNodeBs expedite the scheduling of these
RBs to their respective end users, ensuring adherence to their QoS demands.
This method minimizes the frequency of communication required between gN-
odeBs and the SDN controller. It fosters a quick response to end-user needs by
permitting gNodeBs to procure additional RBs from other gNodeBs. The is-
sue is explored via mathematical programming methodologies, and a solution
utilizing a hierarchical reinforcement learning (HRL) framework is proposed.
The problem is portrayed as a Markov decision process (MDP) at both tiers and
addressed using HRL. At the higher level, an SDN controller utilizes an agent,
supported by a double deep Q-network (DDQN), for the allocation of radio
resources to gNodeBs that align with the demands of eMBB and URLLC ser-
vices. On the other hand, at the lower level, each gNodeB employs an agent,
similarly trained by a DDQN, to distribute its pre-allocated resources to its
end users. The suggested solution displays dynamism, adapting to a range of
factors such as the end-user density, service requirements, and transmission
conditions.
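To make the agent design concrete, the following is a minimal PyTorch sketch of the double-DQN update that an agent at either tier could use; the state dimension, action space (candidate RB allocations), network sizes, and hyperparameters are illustrative assumptions rather than the settings of paper VI.

import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))  # one Q-value per RB allocation

    def forward(self, s):
        return self.net(s)

state_dim, n_actions, gamma = 8, 16, 0.99      # e.g., per-gNodeB demand features
online, target = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def ddqn_update(s, a, r, s_next, done):
    # Double DQN: the online net selects the next action; the target net evaluates it.
    with torch.no_grad():
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        y = r + gamma * q_next * (1.0 - done)
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

s = torch.randn(4, state_dim); a = torch.randint(0, n_actions, (4,))
r = torch.randn(4); d = torch.zeros(4)
print(ddqn_update(s, a, r, torch.randn(4, state_dim), d))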

This study focuses on developing a model that reduces the delay in IoT
systems using the O-RAN architecture, which comprises four key layers: the
end-device layer, the edge layer, the core network, and the application layer.
The end-device layer connects the devices of the end users, which represent
a mixture of IoT (URLLC) and eMBB types, each served by a single DU within
a gNodeB. The edge layer, which contains gNodeBs supporting the O-RAN
architecture, manages resources and provides network access. It includes the
following sublayers: the RU, the O-RAN network functions layer, and the O-
RAN control and management layer. All of these sublayers operate together to
provide effective resource management and seamless connectivity.
The core network, acting as the system’s backbone, connects the edge layer
to the application layer and handles routing, network management, and slicing
operations. It uses an SDN controller for resource management and coordina-
tion, optimizing the network performance by dynamically adjusting to traffic
patterns and service demands.
The application layer offers various services to end users with diverse QoS
requirements, leveraging resources provided by the lower layers.
Network slicing is implemented at multiple levels to minimize the average
latency for IoT systems. The SDN controller at the core network level allo-
cates RBs to gNodeBs, while at the edge layer, each gNodeB assigns these
pre-allocated RBs to its users. This dual-level approach promotes efficient re-
source allocation, enhancing the IoT system performance. In this study, a com-
prehensive mathematical model has been developed with the primary objective
of minimizing delay, particularly for IoT devices. This model is designed to
ensure a minimum data rate for both slices while also enforcing a delay thresh-
old. This means that the delay experienced should not surpass this predefined
threshold.
Furthermore, the model takes into account resource availability, ensuring
that the sum of assigned RBs does not exceed the total number of available
RBs at each level. Another important constraint is that each RB should be
assigned exclusively to one end device, thus avoiding potential conflicts and
optimizing resource utilization.
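In our own notation (a sketch of the formulation rather than the exact model of paper VI), with $x_{u,k} \in \{0,1\}$ indicating that RB $k$ is assigned to end device $u$, $B_g$ the RBs granted to gNodeB $g$, $d_u$ and $r_u$ the delay and data rate of device $u$, and $s(u)$ its slice, the problem reads:

\begin{align*}
\min_{x,\,B}\quad & \sum_{u \in \mathcal{U}} d_u(x) \\
\text{s.t.}\quad  & r_u(x) \ge r^{\min}_{s(u)} \quad \forall u \in \mathcal{U} && \text{(minimum rate per slice)} \\
                  & d_u(x) \le d^{\max}_{s(u)} \quad \forall u \in \mathcal{U} && \text{(delay threshold)} \\
                  & \sum_{g \in \mathcal{G}} B_g \le B^{\mathrm{tot}}, \qquad \sum_{u \in \mathcal{U}_g} \sum_{k} x_{u,k} \le B_g \quad \forall g && \text{(RB budgets at both levels)} \\
                  & \sum_{u \in \mathcal{U}} x_{u,k} \le 1 \quad \forall k && \text{(each RB serves at most one device)}
\end{align*}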
To effectively handle this multi-tiered problem, an HRL framework is im-
plemented that features an MDP at both levels. The MDP is a mathematical
framework for modeling decision-making situations in which outcomes are
partly random and partly under the control of a decision-maker. By applying
an MDP at both levels, the model is capable of capturing the dynamic nature
of the network environment and making optimal decisions about resource allo-
cation in real time, enhancing the performance and responsiveness of IoT sys-
tems across various applications. In this paper, we choose to focus on eMBB
and URLLC slices when studying 5G network slicing for several reasons:

• Distinct requirements: eMBB and URLLC represent two extremes in
terms of service requirements. eMBB is focused on providing a high
bandwidth, while URLLC is more concerned with achieving ultra-reliable
and low-latency communication. This makes the problem of managing
and optimizing these two types of slices a challenging and interesting
research topic.
• Application scope: eMBB and URLLC cater to a wide range of applica-
tions (like high-quality video streaming, autonomous driving, Industry
4.0, and telemedicine), many of which are considered critical for a fu-
ture digital society. On the other hand, mMTC, while also important,
often involves lower data rates and less stringent latency requirements,
and therefore, it might be perceived as less challenging from a network
design and optimization perspective.
• Practical considerations: From a practical standpoint, many applications
that require mMTC are still emerging and are less mature than those
requiring eMBB and URLLC. Therefore, it might be more relevant for
researchers to focus on the latter two categories. It is important to note
that this does not mean that mMTC is not important or not being studied.
However, the urgency of and interest in solving the challenges presented
by eMBB and URLLC services often make them a focus of research in
5G network slicing.

4.2.3 Main Findings


In paper IV, the proposed distributed SDN-based IoT architecture is validated
using two key network features: the response time and throughput.

Figure 4.14 shows that the proposed architecture has a better response time
for many requests than centralized IoT-SDN and traditional networks with-
out SDN. For a small number of requests, the proposed architecture performs
worse than centralized SDN and a network without SDN since it takes some
time for the distributed SDN controllers to synchronize. Fig. 4.15 clearly
shows that the distributed SDN throughput is far better than that of centralized
SDN and networks without SDN. The reason behind this is that, first, SDN
helps reduce packet loss and accelerate data transmission by utilizing the dy-
namic flow table; second, in distributed SDN, the network burden is scattered
over multiple controllers instead of a single controller. Applying distributed
SDN leads to the network being managed more efficiently.
In paper V, the proposed FL approach was evaluated using two versions of the
VGG neural network architecture, VGG-5 and VGG-8.

Figure 4.13: IoT system model with two-level resource allocation.

The CIFAR10 dataset is used for the experiments, and non-IID data are simulated
using four different
classes. The performance metrics evaluated are the number of communication
rounds and accuracy. The proposed approach is compared with the FedAvg al-
gorithm, and the results show that the proposed approach outperforms FedAvg
in terms of accuracy for all levels of non-IID data. The proposed approach
can be trained using shallower networks first and then deployed to train deeper
networks.

In paper VI, we employ three contrasting comparative strategies to illustrate
the effectiveness and superiority of our suggested approach.

Figure 4.14: Response time versus the number of packet-in messages.

Figure 4.15: Throughput for various numbers of controllers.


• Baseline Method: Our first strategy is straightforward and serves as a
baseline. Here, the allocation of RBs to UE is executed in a random
fashion, signifying an arbitrary RB distribution. It is paramount to
understand that within this seemingly random assignment, we adhere to
the constraint that each RB is uniquely allocated and does not serve
more than one UE concurrently. Moreover, the association of the O-RU
is grounded in the principle of proximity, meaning that each UE is
tethered to the O-RU situated closest to its geographic location.

• Dynamic Resource Allocation (DRA): Our second approach is derived from
the dynamic resource allocation algorithm outlined in [32]. Within the
confines of our research, we term this strategy the Dynamic Resource
Allocation (DRA) approach. Veering away from the traditional reliance
on the BBU, commonly associated with the C-RAN, our methodology
accentuates the roles of the O-DU and O-CU components, which are
intrinsic to the O-RAN paradigm. It is crucial to note that this shift in
architectural focus does not impede the comparisons in our study, given
that the capacity of the O-DU and O-CU units is outside the ambit of our
current discussion.

• Two-Time-Scales Slicing Mechanism (SAMA-RL): As a third approach,
we draw upon the mechanism delineated in [17]. The authors of this
study champion a two-time-scales slicing mechanism, specifically tai-
lored to boost the efficiency of URLLC and eMBB services within 5G
ecosystems. They incorporate a large time-scale SDN controller, strate-
gically placed at the network’s core, to apportion radio resources to gN-
odeBs based on the nuanced needs of varying services. On the flip side,
a shorter time scale is adopted by each gNodeB to disburse resources to
end-users, with an in-built mechanism to procure additional resources
from neighboring gNodeBs when the situation demands. The intricacies
of this approach are captured through a non-linear binary program and
further modeled as an MDP with dual time scales.

Figures 4.16 and 4.17 display a comparison of the total delay experienced
by URLLC and eMBB service users as the user count increases for four dif-
ferent methodologies. As the figures demonstrate, our proposed method out-
performs the other three methods in terms of reducing the total delay time. As
expected, as the number of end users increases, so does the delay time due
to an increased demand for resources, which could lead to network conges-
tion and thus, an increase in the total delay. However, our findings show that
despite this increase, the average delay for both URLLC and eMBB services
stays below the set maximum delay threshold. Even under high demand, the
network service’s performance is effectively managed so that it remains within
acceptable boundaries.
Figures 4.18 and 4.19 depict the collective throughput versus the num-
ber of URLLC and eMBB service end users. The figures present a compari-
son between various methodologies, with our proposed algorithm clearly out-
performing the baseline and DRA approaches. This superior performance is
maintained across the whole range of end users, indicating the robustness and
scalability of our algorithm. Moreover, the algorithm maintains high through-
put rates even when the number of end users is substantial, underlining the
practical resilience of our approach. A nuanced observation is that our pro-
posed method shows slightly improved throughput for both services compared
to SAMA-RL. This difference is rooted in the contrasting objectives of the two

Figure 4.16: Total delay of URLLC services with respect to the number of end
users.

methods. While SAMA-RL is designed with an emphasis on maximizing the data
rate, our methodology is primarily centered on minimizing delay. This focus
on delay minimization lends our approach a potential edge in terms of through-
put. However, it is important to clarify that this does not categorically render
our method superior to SAMA-RL across all metrics and scenarios. The dis-
tinct goals of each approach mean they might excel under different conditions
or be more suited for particular applications.
These figures also reinforce that our proposed algorithm is capable of con-
verging towards a globally optimal solution, indicating that it provides the most
efficient resource allocation for maximum throughput. This suggests that our
approach is skilled at understanding the complexities of the problem and iden-
tifying and reaching the optimal configuration.

4.3 Distributed Intelligence for Distributed IoT


The previous two sections discussed the two-tier architecture used to provide
distributed intelligence in IoT systems and distributed communication mecha-
nisms. This section explains the overall system, which can optimally provide
distributed intelligence by combining a two-tier architecture and distributed
communication mechanisms.

4.3.1 Motivation
The IoT is a rapidly growing field that involves the integration of devices and
sensors into everyday objects to collect and analyze data. These data can be
used to optimize processes, improve efficiency, and provide new services to

Figure 4.17: Total delay of eMBB services with respect to the number of end
users.

Figure 4.18: Aggregated throughput of URLLC services with respect to the
number of end users.

users. However, as the number of IoT devices and sensors continues to grow, it
becomes increasingly challenging to process and analyze the data generated by
these devices. A distributed intelligence approach can be used to address this
challenge; in this approach, decision-making and data analysis are distributed
across multiple devices and servers. A two-tier architecture and distributed
communication mechanisms are two approaches that can be used to imple-
ment this distributed intelligence. However, to optimize the overall system, it
is important to design an architecture that combines both approaches to maxi-
mize their benefits. To provide efficient and scalable solutions for processing
and analyzing IoT data, this section proposes an approach combining a two-
tier architecture and distributed communication mechanisms.

Figure 4.19: Aggregated throughput of eMBB services with respect to the number
of end users.

The proposed system architecture can be designed to optimally utilize devices’
processing power and decision-making capabilities at the edge layer while also
providing
additional processing power and storage capacity in the cloud layer. The use
of distributed communication mechanisms allows for efficient communication
between devices and servers by optimizing resource allocation.

4.3.2 Study Design


The primary objective of paper III was to critically examine the utility of edge
computing in combination with the FL approach to address the short-term load
forecasting (STLF) challenge within microgrid energy management systems
(EMSs). The term “edge computing” is used to describe the concept of con-
ducting data processing closer to the source of the data, that is, at the network’s
edge, as opposed to traditional cloud computing or remote server processing,
which happens far from the data source.
In this research, we leverage the long short-term memory (LSTM) network, a
recurrent neural network architecture specifically designed for modeling time
series data. The LSTM model is trained to use
the historical observations of the microgrid’s electrical load to forecast future
load values accurately.
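As a minimal sketch of such a forecaster, the PyTorch model below mirrors the two 200-neuron LSTM layers and the look-back of 12 reported later in this section; all other details are illustrative assumptions rather than the thesis implementation.

import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    def __init__(self, n_features=1, hidden=200, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # one-step-ahead load prediction

    def forward(self, x):                         # x: (batch, look_back, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])           # hidden state of the last time step

model = LoadForecaster()
x = torch.randn(32, 12, 1)                        # 32 windows with a look-back of 12
print(model(x).shape)                             # torch.Size([32, 1])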
Our study makes several key contributions to the field of microgrid energy
management. First, we propose an innovative architecture that enables the
implementation of FL using edge equipment within the framework of the smart
grid. This novel architecture seeks to leverage the benefits of edge computing,
such as reduced latency and improved data privacy, and combine them with the
strengths of FL, like improved model performance and enhanced data security.

Second, we conduct a series of simulations to evaluate the potential ben-
efits of adopting FL in the context of load forecasting. We particularly focus
on measuring the improvements in accuracy that can be achieved by deploying
our proposed edge-enabled FL approach.
By integrating edge computing with FL to overcome the STLF challenge,
this paper seeks to push the boundaries of what is achievable in microgrid
EMSs. Our work thus represents a significant stride towards more efficient,
secure, and accurate load forecasting in smart grids, paving the way for more
sustainable and reliable energy management.
Finally, this paper incorporates a unique approach to managing the chal-
lenges associated with FL. One of the salient strategies we have employed
involves the utilization of an outlier detection mechanism at the server level.
The primary function of this detector is to identify and subsequently eliminate
weights that are derived from noisy or inconsistent data. This is a critical mea-
sure that ensures the quality and accuracy of the learning process, reducing the
likelihood of skewed or inaccurate models due to the integration of unreliable
data.
Additionally, to further refine the learning process, we introduced the con-
cept of personalization in our model. The need for personalization arises in
instances where there is a duplication of data or a requirement for a more gen-
eralized model. Personalization in FL entails the adjustment or tuning of the
model to suit the specific characteristics or needs of each client. It enhances
the model’s ability to cater to individual client attributes, resulting in a more
accurate and representative model that effectively captures the unique patterns
in each client’s data.
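A minimal sketch of these two steps, under our own simplifying assumptions (median-distance outlier scoring and one flattened weight vector per client; paper III's exact mechanism may differ):

import numpy as np

def aggregate_with_outlier_filter(client_weights, z_thresh=2.5):
    # client_weights: one flattened weight vector per client.
    W = np.stack(client_weights)
    # Score each update by its distance to the coordinate-wise median update.
    dists = np.linalg.norm(W - np.median(W, axis=0), axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-9)
    keep = z < z_thresh                  # drop updates derived from noisy data
    return W[keep].mean(axis=0), keep

weights = [np.random.randn(100) for _ in range(9)] + [np.random.randn(100) * 50]
global_w, kept = aggregate_with_outlier_filter(weights)
print("clients kept:", kept.sum(), "of", len(weights))
# Personalization (sketch): each client would then copy global_w and fine-tune it
# locally for a few epochs on its own data, yielding a client-specific model.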
These innovative strategies, that is, outlier detection and personalization,
form an integral part of our FL approach. They contribute significantly to the
robustness and reliability of our methodology, ensuring that it not only accom-
modates the inherent variability in the data but also safeguards the model from
the adverse effects of noise or inconsistency in the data. This not only im-
proves the overall learning process but also ensures that the derived insights
and outcomes are both accurate and meaningful, laying a solid foundation for
future advancements in the field of FL.

In paper IV, as mentioned in the previous section, a distributed SDN
controller is proposed and designed based on the IoT system characteristics. In
order to use it as an overlay network for the FL system, we proposed a node
selection algorithm for delay-sensitive applications such as FL. Algorithm 2
utilizes contextual data from SDN controllers to determine the most efficient
route for nodes during each communication cycle. The algorithm calculates
the usage of each link (as demonstrated in line 3) through the implementation

Algorithm 2 Node Selection Algorithm for Delay-Sensitive Applications
Input: network topology, threshold (T), number of clients (k), links list (L).
Output: number of clients for training (K).

1:  Before every communication round, do
2:    for each link l in L do
3:      LU ← compute port utilization based on Equation (4.2)
4:      if LU > T then
5:        L.remove(l)
6:      end if
7:    end for
8:    Topo.update(L)
9:    for each node i in N do
10:     Ps_i ← Dijkstra's algorithm (server, i)
11:     Cp_i ← compute the cost of Ps_i based on Equation (4.3)
12:   end for
13:   Cp.sort()
14:   N.sort(Cp)
15:   Select the first K nodes in N

of Equation (4.2), which may be equivalent to the usage of the associated port:

$$\text{PortUtilization} = \frac{\text{TransmittedData (bits)} \times 100}{\text{TimeInterval (s)} \times \text{PortSpeed (bps)}}. \tag{4.2}$$

According to [86], when the utilization of a link exceeds 70%, the network
throughput begins to decrease. Consequently, links with a utilization greater
than T = 70% are removed from the list of available links (lines 4 and 5). After
this step, the controller updates the network topology by considering the new
links list (line 8). Then, Dijkstra's algorithm is used to calculate the shortest
path for each available node. The next step involves determining the cost of
the shortest path to the server for each node using Equation (4.3). If there is
no shortest path for a client, the cost is ∞. Based on the calculated costs, the
nodes are sorted, and the top K nodes with lower costs than the other nodes are
selected for training in the current communication round:

$$C_{P_i} = \sum \big\{\, LU_{jz} \;\big|\; \text{link}_{jz} \in P_i \,\big\}. \tag{4.3}$$
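A minimal networkx-based sketch of Algorithm 2, with an assumed topology and assumed port statistics (this is our own illustration, not the thesis implementation):

import networkx as nx

def port_utilization(tx_bits, interval_s, port_speed_bps):
    return (tx_bits * 100) / (interval_s * port_speed_bps)       # Equation (4.2)

def select_clients(G, server, clients, stats, K, T=70.0):
    # G: nx.Graph; stats maps an edge (u, v) to (tx_bits, interval_s, speed_bps).
    H = G.copy()
    for (u, v), (tx, dt, speed) in stats.items():
        lu = port_utilization(tx, dt, speed)
        H[u][v]["lu"] = lu
        if lu > T:                        # prune over-utilized links
            H.remove_edge(u, v)
    costs = {}
    for c in clients:                     # path cost = sum of link utilizations, Eq. (4.3)
        try:
            costs[c] = nx.dijkstra_path_length(H, server, c, weight="lu")
        except nx.NetworkXNoPath:
            costs[c] = float("inf")       # unreachable clients are never selected
    return sorted(clients, key=costs.get)[:K]

G = nx.path_graph(4)                      # 0-1-2-3, with node 0 as the FL server
stats = {(0, 1): (6e8, 1.0, 1e9), (1, 2): (2e8, 1.0, 1e9), (2, 3): (1e8, 1.0, 1e9)}
print(select_clients(G, 0, [2, 3], stats, K=1))                  # -> [2]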

In paper V, we introduced an optimization method for FL (distributed
intelligence) based on edge computing in 5G to implement the three-layer client-
edge-cloud architecture, where local model parameter updates are performed

Figure 4.20: O-RAN setup for the underlying architecture [84].

at the client edge. Global aggregation is performed between the edge and
the cloud. Therefore, we proposed an architecture consisting of two parts,
a high-level part and a low-level part, as mentioned in the previous section.
The high-level part of the architecture focuses more on the software compo-
nents needed to create the distributed intelligence system scattered across the
network overlay. We proposed a multitask FL architecture for intelligent IoT
applications. Our proposed system leverages a cloud-edge design, as shown in
Fig. 4.17, which places essential on-demand edge processing resources near
IoT devices. To support highly efficient FL for intelligent IoT applications, we
use RL to group IoT devices based on the type and volume of their data and
latency. This method allows devices with the same characteristics to jointly
train a shared global model by aggregating locally computed models at the
edge while keeping all the sensitive data local. Thus, the requirements of IoT
applications for a high processing efficiency and low latency can be met. As
shown in Fig. 4.19, the proposed collaborative learning process involves three
main phases:

• Offloading phase: In this phase, IoT devices can either transfer their
complete learning model and data samples to the edge for rapid compu-
tation or split the model into two parts. The first part is trained locally
using the device’s data samples, while the second part is offloaded to the
edge for collaborative data processing. It is assumed that IoT devices
either perform the whole training model or offload it to the edge (IoT
gateway).

• Clustering phase: An RL method has been proposed in this phase to
cluster devices based on their data distribution, data size, and communication
latency to capture specific device characteristics and requirements. Our
approach involves formulating the situation as an RL problem. In the first
round of FL model training, we select clients randomly. From the second
round onward, the RL agent analyzes each client's training loss and
communication status to determine which clients will be chosen for training.
In this subsection, we present our problem formulation and the design of
our RL system. Our optimization goal is to minimize the learning time
while maintaining a high level of accuracy for the global model. This goal
is defined as follows (a sketch of this reward appears after this list):

$$\max_{a \in A}\; \big[ Acc(T) - HT - LT \big], \tag{4.4}$$

where Acc(T) is the accuracy of the global model on the test dataset after
the training process, HT is the sum of the training times of all training
rounds, LT is the sum of the latencies of all training rounds, and A is the
set of actions that form the matrix for client selection.

• Learning phase: After uploading data to the IoT gateway, the local
model is trained and transmitted to the master node. The master node
averages the received local model parameters into a global model and
sends it back to the edges. This training process is repeated until the
desired accuracy is achieved, resulting in a high-quality global model.
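The reward of Equation (4.4), together with a greedy client-selection action, can be sketched as follows (the unscaled combination of the accuracy and time terms is an illustrative assumption):

import numpy as np

def reward(acc_T, train_times, latencies):
    # acc_T: global-model test accuracy after round T;
    # train_times, latencies: per-round training times and latencies.
    HT, LT = np.sum(train_times), np.sum(latencies)
    return acc_T - HT - LT                # Equation (4.4)

def greedy_selection(q_values, k):
    # Pick the k clients with the highest estimated action values.
    return np.argsort(q_values)[-k:][::-1]

print(reward(0.92, [0.1, 0.12], [0.02, 0.03]))               # 0.92 - 0.22 - 0.05 = 0.65
print(greedy_selection(np.array([0.2, 0.9, 0.1, 0.7]), k=2)) # -> [1 3]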

4.3.3 Main Findings


In order to assess the efficacy of the proposed system in paper III, we con-
ducted an experiment that involved varying data distributions across worker
nodes, as shown in Fig. 4.21. It should be mentioned that despite not incorpo-
rating the fog layer in the research presented, we have intentionally included it
in the illustrated figure. The inclusion of the fog layer in the schematic depic-
tion emphasizes the flexibility and extensibility of the proposed architecture.
This not only conveys its potential to accommodate additional computational
layers but also underlines that the fog layer can be integrated into the design
should future applications or scenarios necessitate its inclusion. Therefore,
this depiction serves as an indication of architectural adaptability, signaling

Figure 4.21: The smart grid scenario used in paper III.

Worker node   Residential EMS clients   Industry EMS clients   Emergency-facility EMS clients
     1                  20                        1                           1
     2                  50                        1                           2
     3                  80                        4                           2
     4                  50                        4                           2
     5                  80                        6                           3
     6                 150                       10                           5
     7                  60                        8                           3
     8                 100                        9                           5
     9                  40                        3                           5
    10                  70                        4                           4

Table 4.1: Distribution of data among nodes.

the possibility of future expansion or modification in response to changing
requirements or advancements in technology. The experiment was performed
three times, with each version of the experiment involving a different number
of worker nodes (4, 8, 10), to investigate the impact of increasing the number
of subsets. The FL algorithm was run for 20 rounds in each iteration. The
distribution of the client data on the worker nodes has been provided in Table
4.1. For example, node 1 has 20 residential clients, one industry client, and
one emergency-facility client.
The resulting model from this experiment incorporated two LSTM hidden
layers, each consisting of 200 neurons. We utilized the mean squared error
as our loss function and employed the Adam optimizer. The model typically
converged by the 20th epoch; hence, we used close values for the rounds and
epochs.

                               RMSE                       MAPE
Number of worker nodes   Min.    Max.    Mean      Min.     Max.     Mean
          4              0.039   2.63    0.562     8.59%   89.54%   37.21%
          8              0.053   2.624   0.596    11.61%   93.26%   37.98%
         10              0.039   2.568   0.559    10.26%   95.94%   38.26%

Table 4.2: RMSE and MAPE results for global models.

                               RMSE                       MAPE
Number of worker nodes   Min.    Max.    Mean      Min.     Max.     Mean
          4              0.0     2.38    0.536     7.53%   97.76%   34.36%
          8              0.0     2.49    0.551     7.91%   99.16%   36.29%
         10              0.0     2.49    0.550     8.24%   98.23%   36.69%

Table 4.3: RMSE and MAPE results after personalization.

The data for this research were sourced from a modified version of
Pecan Street Inc’s Dataport dataset [87]. The dataset comprises circuit-level
electricity usage data collected from about 800 U.S. homes at one-minute to
one-second intervals. The data include records of photovoltaic generation and
electric vehicle charging for a certain subset of homes. For our experiment,
we extracted data from January 1 to March 31, 2019, focusing on data with
a one-hour resolution. We selected a subset of 400 clients from Texas with
similar properties and supplemented it with three additional categories: in-
dustrial consumption, emergency facility consumption, and renewable energy
resources. We then prepared the data for further analysis by scaling the values
between 0 and 1 and converting the time series into sliding windows with a
look-back of size 12 and a look-ahead of size 1. The data were then divided
into training and test subsets (90% for training and 10% for testing).
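The scaling and windowing steps can be sketched as follows (a minimal illustration, with synthetic data standing in for the Dataport readings):

import numpy as np

def make_windows(series, look_back=12, look_ahead=1):
    x = (series - series.min()) / (series.max() - series.min())   # scale to [0, 1]
    X, y = [], []
    for i in range(len(x) - look_back - look_ahead + 1):
        X.append(x[i:i + look_back])                  # input window
        y.append(x[i + look_back + look_ahead - 1])   # value to predict
    return np.array(X), np.array(y)

load = np.random.rand(2160)               # e.g., 90 days of hourly readings
X, y = make_windows(load)
n_train = int(0.9 * len(X))               # 90/10 train-test split
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]
print(X_train.shape, X_test.shape)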
The performance of the model was evaluated using the root mean square
error (RMSE) and mean absolute percentage error (MAPE). The RMSE was
utilized to quantify the error in terms of energy, while the MAPE provided a
percentage-based measurement of the error relative to the actual value. The
resulting global models from the evaluated scenarios, derived using the FL
approach, were assessed in terms of the RMSE and MAPE. The results are
outlined in Table 4.2, which shows reasonable MAPE values for various mod-
els. Interestingly, we found that the global model is more adept at fitting some
nodes over others, reflecting the variance in EMS profiles. Larger numbers of
EMSs in each round proved preferable, although if sending updates becomes
more network-resource-intensive, this can be offset by increasing the number
of local training epochs.
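For reference, the two error metrics can be computed as follows (a sketch using their common definitions; the guard against zero actual values in the MAPE is our own assumption):

import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))   # error in energy units

def mape(y_true, y_pred, eps=1e-9):
    return 100 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))  # percent error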
Furthermore, we analyzed the impact of personalization on the model per-
formance. First, we tested if the local retraining of the model for participant
clients yielded improved results. We then applied the same process to non-
participant clients. Each client’s models were retrained for four epochs. The
results for the participant client set are provided in Table 4.3, and they indicate
a general improvement in most models. For instance, subset 1 demonstrated
an overall improvement of 2.85% in terms of the MAPE. Nonetheless, certain
clients, which were considered outliers, did not exhibit improved performance
despite retraining, likely due to the quality of their historical data. Applying
the models to these clients’ consumption profiles resulted in high MAPE val-
ues, skewing the average results.

In paper IV, the proposed FL architecture in
an SDN-based IoT context is validated using several key network features: the
delay, response time, throughput, and processing time.
Figure 4.22 shows the variance in the delay as a function of the number of
IoT gateways (FL workers) for one round of communication. We compared a
system without SDN, systems with one and two controllers, and a system with
two controllers using the node selection algorithm, as shown in Fig. 4.22. The
results indicate that the use of SDN in the IoT is essential. The proposed dis-
tributed SDN-IoT has a lower delay than the centralized SDN-IoT. The delay
is higher in the centralized SDN design since the entire network load falls on
a single controller. The delay is reduced because there are multiple controllers
in the proposed architecture. The proposed node selection algorithm is applied
to the distributed SDN system and proactively forwards the packets over less
congested links. It therefore delivers the packets faster than the existing sys-
tem. Fig. 4.23 shows the total delay for 20 IoT gateways (FL workers) as a
function of the number of communication rounds. We see that the delay in
the proposed distributed SDN approach augmented by the node selection al-
gorithm for many communication rounds is much lower than the delay for the
other systems.
In paper V, we evaluated our proposed approach by training visual geom-
etry group (VGG) networks [30]. The VGG network is a standard convolu-
tional neural network (CNN) architecture and has several layers. In this paper,
we used two versions of the VGG network, namely VGG-5 (five layers) and
VGG-8 (eight layers). Each of these networks has three fully connected layers,
and the rest of the layers are convolutional layers. In order to simulate non-IID
data, we have defined four different classes, as mentioned in the paper. The
performance metrics for the evaluation of the proposed system are the number
of communication rounds and accuracy. As stated in the previous section, we
performed this comparison using two deep convolutional networks on various

Figure 4.22: Delay for various numbers of IoT gateways.

Figure 4.23: Delay for various numbers of communication rounds.

levels of non-IID data. We conducted a comparison between our proposed
algorithm and the widely known FedAvg algorithm.
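A label-based non-IID partition in the spirit described can be sketched as follows (the class names follow the text, e.g., Class-1 gives each client one label and Class-2E two evenly represented labels, while the shard sizes and sampling are our own illustrative assumptions):

import numpy as np

def partition_non_iid(labels, n_clients, labels_per_client, samples_per_label=100):
    classes = np.unique(labels)
    rng = np.random.default_rng(0)
    shards = {c: np.where(labels == c)[0] for c in classes}
    clients = []
    for _ in range(n_clients):
        chosen = rng.choice(classes, size=labels_per_client, replace=False)
        idx = np.concatenate([rng.choice(shards[c], size=samples_per_label,
                                         replace=False) for c in chosen])
        clients.append(idx)        # each client only sees labels_per_client labels
    return clients

labels = np.repeat(np.arange(10), 500)    # toy stand-in for CIFAR-10 labels
clients = partition_non_iid(labels, n_clients=15, labels_per_client=2)  # Class-2E style
print(len(clients), clients[0].shape)     # 15 clients, 200 samples each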
When dealing with imbalanced data, the accuracy metric might not provide
the most insightful representation of a model’s performance, as it is sensitive
to the proportions of the target classes. Given this, considering the accuracy
could potentially lead to misleading results, where the model appears to per-
form well due to its effectiveness in identifying the majority class while miss-
ing the minority class. The area under the precision-recall curve (AUPRC) is
indeed a more appropriate measure in such cases. Precision-recall curves are
more informative than ROC curves when dealing with class imbalance, as they
focus on the minority class performance, which is often of the most interest
in these scenarios. The AUPRC measures the tradeoff between the precision,
which is the proportion of true positive results out of all positive predictions,
and the recall, which is the proportion of true positive results identified cor-
rectly. However, we chose the accuracy as the evaluation metric. Our choice
to use accuracy was based on the fact that we defined and experimented with
different levels of non-IID-ness. In situations in which the level of non-IID-
ness is low, the accuracy is a highly relevant and effective metric for gauging
the performance of our client selection method and the resulting model.
Figure 4.24 illustrates the accuracy of VGG-5 across different levels of
non-IID data. In each round of training, 10 out of 15 clients were used, and the
training process involved 20 epochs. Fig. 4.24 shows the results for the level
of non-IID data in which all the data belong to one label (Class-1). Although
our proposed method initially has a lower accuracy than FedAvg, it eventually
reaches a higher accuracy in the later rounds of the learning process. The
learning curve for both algorithms at this level of non-IID data has erratic
fluctuations, which is normal.
For Class-2E non-IID data, where the data are evenly distributed between
two labels, our proposed algorithm is more accurate from the beginning of the
training process to its end, as shown in Fig. 4.24.
For Class-8 and Class-5, as shown in Fig. 4.24, our proposed algorithm
achieves a higher accuracy than FedAvg from the beginning of the process to
the end of the process. However, it is worth noting that FedAvg can achieve
the same accuracy as our proposed algorithm but with more rounds of training.
Overall, our proposed method is more accurate than FedAvg in the initial
training rounds for all levels of non-IID data except for Class-1, for which the
proposed method is more accurate towards the end of the training period. It
is essential to note that for this level of non-IID-ness, it is not expected that
any method can be significantly superior, as it represents the highest degree of
non-IID-ness.

Figure 4.24: Accuracy for different numbers of communication rounds.

5. Concluding Remarks

This chapter discusses the contributions of this dissertation, which addresses
challenges in distributed intelligence in IoT systems. It revisits the research
objectives and reflects on the achievements of the dissertation, and it
concludes by proposing future research directions for enabling fully functional
autonomic computing in the IoT.

5.1 Discussion
This dissertation aims to tackle the challenges that the IoT will face due to
the rapid increase in connected devices and the data volume. The IoT is ex-
pected to involve billions of connected devices, generating a massive amount
of largely unanalyzed and underutilized data. While cloud- and edge-based
solutions have been proposed to address this issue, they have been found to
be ineffective in addressing the current and future challenges of the IoT. As a
result, this dissertation proposes a new approach focusing on edge-based so-
lutions. This thesis presents a two-level distributed intelligence approach that
aims to efficiently use and analyze contextual information. The DSR method
is employed to develop several artifacts that address the research question, and
these artifacts are described in Chapter 3 and spread across six publications.
The contributions from these publications help to address the research ques-
tion and close the existing gap in the limitations of this type of approach. The
achievements of this dissertation are summarized in the following subsections,
which reflect on each of the research sub-questions before addressing the over-
all research question. The first research sub-question, which is related to the
system model enabling a distributed intelligence approach and reflecting the
system’s performance in real-life scenarios, is answered primarily in subsec-
tions 4.1.1, 4.1.2, and 4.1.3. A two-level distributed intelligence model, one
of the contributions of this dissertation, is proposed; in this model, both edge
and cloud computing are explored, as prior research has shown that central-
ized approaches fail to counter the scalability issue, among many other issues.
An implementation blueprint was also described and verified in papers I and
II, which investigate its performance. Paper II also evaluates the design of an
IoT gateway that implements low-level intelligence in terms of the delay and
required bandwidth. This particular evaluation helped to realize the feasibility
of the proposed physically distributed intelligence approach for real-life sce-
narios. The second sub-question is related to suitable reasoners for distributed
intelligence in IoT systems, and it is answered primarily in subsections 4.1.1,
4.1.2, and 4.1.3. In papers I and II, a fuzzy logic reasoner has been proposed to
deal with data uncertainty. In paper II, an automatic rule generator algorithm
has been proposed to generate rules for the fuzzy logic reasoner automatically
given the environmental circumstances. Then, the fuzzy logic controller was
replaced with FL in paper III to evaluate the performance of our method and
another reasoning method for the specified purpose. The proposed FL model
has been evaluated by reporting the error of the FL prediction for different
numbers of worker nodes. The third research sub-question that this thesis ad-
dresses is answered in papers IV, V, and VI and subsections 4.1.1, 4.1.2, and
4.1.3 of this dissertation. The question of how and to what extent the scala-
bility of distributed intelligence can be facilitated in terms of communication
between edge devices and the cloud is answered by exploiting the concept of
SDN for fixed networks and O-RAN-based systems for mobile networks. In
paper IV, distributed SDN is integrated into the two-tier architecture proposed
in the first paper. The architecture consists of high-level intelligence and lower-
level intelligence. The lower level of intelligence operates at the edge of the
network, and the high level operates in the cloud.
The final research sub-question that this thesis addresses is answered in
papers IV and VI and subsections 4.1.1, 4.1.2, and 4.1.3 of this dissertation.
The question of how to organize the controllers with maximal autonomy of the
system is answered by exploiting the self-organization concept of distributed
SDN, and O-RAN-based 5G networks are designed, developed, and evaluated
here in order to support self-organization. Self-organization implies that an en-
tity, that is, a thing, should optimize itself according to policies set by outside
sources. The results suggest that it is possible to structure a system, such as
gateways/controllers, to manage the network in an organized way and that fur-
ther correct evolution can be ensured with minimal intervention from outside
sources.
It is also worth mentioning that the edge computing field is quite diverse,
with different architectures being designed and optimized for a variety of sce-
narios, setups, and device types. These architectures often depend heavily on
the specifics of the hardware used, the characteristics of the network infrastruc-
ture, the nature of the IoT devices involved, the kinds of data they generate, and
the specific application requirements.
Consequently, orchestrating a fair and meaningful comparison between
different edge-based architectures can be an incredibly intricate and challeng-
ing undertaking. It necessitates controlling for a broad spectrum of variables
and potentially requires access to a diverse array of hardware and setups, a
condition that may not always be attainable.
Conversely, comparing edge computing to cloud computing spotlights a
more macro-level distinction between centralized processing and storage ver-
sus distributed processing and storage. This comparison can be conducted in
a more straightforward manner by examining the impacts of these approaches
on IoT applications and their performance requirements, such as latency, band-
width, energy consumption, and data privacy.
Given this context, the approach to evaluation has been twofold. This re-
search has endeavored to conduct comparisons with both other edge-based
methods and cloud-based approaches whenever possible. This dual compar-
ison strategy makes it possible to assess the proposed method’s performance
in the broader scope of edge computing while also taking into account the
larger architectural implications of edge computing versus cloud computing.
This way, it was possible to evaluate the effectiveness of the proposed method
across a variety of pieces of hardware, network infrastructures, and application
requirements, thus providing a comprehensive analysis of its performance and
scalability.

5.2 Conclusion
In conclusion, the fast-paced evolution and expansion of the IoT are changing
the landscape of digital communication, requiring innovative solutions to
manage its growing complexities. The increasing proliferation of connected
devices, coupled with escalating data volumes, presents challenges in terms of
system efficiency, response time, scalability, and data security. Recognizing
these challenges, this dissertation presents a pioneering approach that seeks to
push the boundaries of traditional IoT management. Central to our proposed
approach is the emphasis on edge-based solutions. By transitioning more com-
putational tasks to the edge – closer to where data is generated – we can poten-
tially reduce latency, enhance real-time processing capabilities, and mitigate
the strains on central servers and communication networks. Moreover, the
adoption of a two-tier distributed intelligence model ensures that data process-
ing and analysis are not only faster but also contextually more relevant. This
approach aligns with the modern need for dynamic IoT ecosystems that can
autonomously adapt to their environment, thereby offering better services and
more robust security to end-users. Furthermore, our findings underscore the
significance of distributing intelligence in the IoT landscape. As the bound-
aries of what’s possible in the digital realm continue to expand, a centralized
approach risks becoming too cumbersome and inflexible. Distributed intelli-
gence, by its very nature, offers a more agile and adaptable solution, ensuring
IoT systems are equipped to handle future challenges, be they in scale, variety,
or complexity.
Delving into specifics:

• Firstly, we explored a system model that champions distributed intelligence
and mirrors real-life performance. This was principally addressed in Papers II
and III, where we proposed a two-tier distributed intelligence model, validated
its feasibility, and evaluated its performance.

• Secondly, we sought to determine suitable reasoners for distributed
intelligence in IoT systems. Papers I, II, and III provide insights into this
area. The proposal of a fuzzy logic reasoner and an automatic rule generator
algorithm in Papers I and II sheds light on addressing data uncertainty.
Meanwhile, Paper III introduces the idea of replacing the fuzzy logic
controller with federated learning as an alternative reasoning method.

• Thirdly, our research assessed the scalability of distributed intelligence,
especially concerning communication between edge devices and the cloud.
Papers IV, V, and VI delve into this, with a highlight on integrating
Software-Defined Networking (SDN) and O-RAN-based systems to en-
hance scalability. Specifically, Paper IV integrates SDN into our two-
tier architecture, revealing a multi-layered intelligence system with both
edge and cloud components.

• Lastly, our exploration steered towards understanding how to optimize
system autonomy. Papers IV and VI tackle this, demonstrating how
the self-organization concepts of distributed SDN and O-RAN-based 5G
networks can be harnessed. Results indicate the viability of designing
a system where gateways/controllers manage network organization au-
tonomously, with minimal external intervention.

5.3 Future Work


Despite the contributions made in this dissertation toward addressing the chal-
lenges of the IoT through a distributed intelligence approach, there is still room
for further research in this area. Moving forward, a noteworthy avenue for up-
coming investigations and studies is centered around probing the scalability of
the existing system. As the IoT continues to experience a surge in demand,
coupled with the escalating complexity of both mobile and fixed networks, the
capability of the current system to efficiently scale up and increase its reach be-
comes an essential focal point. Another area of future work could be exploring
the use of advanced ML techniques, such as deep transfer learning, to improve
the accuracy of the reasoning and decision-making process at the edge of the
network. Additionally, these techniques could be used for optimizing the net-
work parameters. This could potentially lead to the more efficient and effective
use of contextual information. In addition to the aforementioned areas, another
potential area of future work could be investigating the dynamic offloading of
computing tasks between the edge and the cloud. The proposed distributed
intelligence approach focuses on utilizing edge devices for processing and an-
alyzing contextual information. Still, there may be instances where edge de-
vices are unable to handle the workload. In such cases, offloading some of the
computing tasks to the cloud can be beneficial. Future research could inves-
tigate the development of dynamic offloading algorithms that can determine
the optimal distribution of computing tasks between the edge and the cloud
based on various factors, such as the available resources, network conditions,
and energy consumption. Additionally, it may be beneficial to investigate us-
ing edge-based ML models to predict when offloading may be necessary and
optimize the offloading process. Furthermore, as the number of connected de-
vices continues to increase, it is important to consider the energy consumption
and sustainability of the proposed approach. Future work could investigate the
development of energy-efficient algorithms and protocols for the distributed
intelligence approach, as well as explore renewable energy sources to power
edge devices.
Lastly, it could be interesting to investigate the interoperability and stan-
dardization of the proposed approach across different IoT platforms and sys-
tems. Interoperability is crucial for the seamless integration of various IoT
devices and systems. Standardization could ensure that the proposed approach
can be applied to a wide range of IoT applications and use cases.

References

[1] PARTHA P RATIM R AY. A survey on Internet of Things architectures. Journal of King Saud
University-Computer and Information Sciences, 30(3):291–319, 2018. 1

[2] D RAGOS M OCRII , Y UXIANG C HEN , AND P ETR M USILEK. IoT-based smart homes: A review of
system architecture, software, communications, privacy and security. Internet of Things, 1:81–
98, 2018. 1

[3] H AMIDREZA A RASTEH , VAHID H OSSEINNEZHAD , V INCENZO L OIA , AURELIO T OMMASETTI ,


O RLANDO T ROISI , M IADREZA S HAFIE - KHAH , AND P IERLUIGI S IANO. IoT-based smart cities:
A survey. In 2016 IEEE 16th International Conference on Environment and Electrical Engineering
(EEEIC), pages 1–6. IEEE, 2016. 1

[4] N IKESH G ONDCHAWAR , R.S. K AWITKAR , ET AL . IoT based smart agriculture. International
Journal of Advanced Research in Computer and Communication Engineering, 5(6):838–842, 2016.
1

[5] S TEPHANIE B. BAKER , W EI X IANG , AND I AN ATKINSON. Internet of Things for smart health-
care: Technologies, challenges, and opportunities. IEEE Access, 5:26521–26544, 2017. 1

[6] Internet of Things (IoT) and non-IoT active device connections worldwide
from 2010 to 2025. https://www.statista.com/statistics/1101442/
iot-number-of-connected-devices-worldwide/. Accessed: 2023-02-14. 1

[7] M EHDI M OHAMMADI , A LA A L -F UQAHA , S AMEH S OROUR , AND M OHSEN G UIZANI. Deep
learning for IoT big data and streaming analytics: A survey. IEEE Communications Surveys
& Tutorials, 20(4):2923–2960, 2018. 1

[8] Businesses are not using IoT data effectively. https://www.iottechnews.com/news/2022/


feb/23/inmarsat-finds-businesses-not-using-iot-data-effectively/, 2022. Ac-
cessed: 2023-06-14. 1

[9] S AMIYA K HAN , K ASHISH A RA S HAKIL , AND M ANSAF A LAM. Cloud-based big data analyt-
ics—A survey of current research and future directions. Big Data Analytics: Proceedings of CSI
2015, pages 595–604, 2018. 1

[10] G OPIKA P REMSANKAR , M ARIO D I F RANCESCO , AND TARIK TALEB. Edge computing for the
Internet of Things: A case study. IEEE Internet of Things Journal, 5(2):1275–1284, 2018. 2

[11] M OHAMED K. A BDEL -A ZIZ , C HEN -F ENG L IU , S UMUDU S AMARAKOON , M EHDI B ENNIS , AND
WALID S AAD. Ultra-reliable low-latency vehicular networks: Taming the age of information
tail. In 2018 IEEE Global Communications Conference (GLOBECOM), pages 1–7. IEEE, 2018. 2

[12] J IALE Z HANG , B ING C HEN , YANCHAO Z HAO , X IANG C HENG , AND F ENG H U. Data security
and privacy-preserving in edge computing paradigm: Survey and open issues. IEEE Access,
6:18209–18237, 2018. 2
[13] E JAZ A HMED , A RIF A HMED , I BRAR YAQOOB , J UNAID S HUJA , A BDULLAH G ANI , M UHAMMAD
I MRAN , AND M UHAMMAD S HOAIB. Bringing computation closer toward the user network: Is
edge computing the solution? IEEE Communications Magazine, 55(11):138–144, 2017. 2

[14] DAVID G UNNING , M ARK S TEFIK , JAESIK C HOI , T IMOTHY M ILLER , S IMONE S TUMPF,
AND G UANG -Z HONG YANG . XAI—Explainable artificial intelligence. Science Robotics,
4(37):eaay7120, 2019. 2

[15] E MNA BACCOUR , NARAM M HAISEN , A LAA AWAD A BDELLATIF, A IMAN E RBAD , A MR M O -
HAMED , M OUNIR H AMDI , AND M OHSEN G UIZANI . Pervasive AI for IoT applications: A survey
on resource-efficient distributed artificial intelligence. IEEE Communications Surveys & Tutorials,
2022. 2, 3

[16] M ARIA R. E BLING. Pervasive computing and the Internet of Things. IEEE Pervasive Computing,
15(1):2–4, 2016. 2, 3

[17] A MRITA C HAKRABORTY AND A RPAN K UMAR K AR. Swarm intelligence: A review of algo-
rithms. Nature-inspired Computing and Optimization: Theory and Applications, pages 475–494,
2017. 3

[18] D ZAKY Z AKIYAL FAWWAZ AND S ANG -H WA C HUNG. Real-time and robust hydraulic system
fault detection via edge computing. Applied Sciences, 10(17):5933, 2020. 4

[19] O LA S ALMAN , I MAD E LHAJJ , AYMAN K AYSSI , AND A LI C HEHAB. Edge computing enabling
the Internet of Things. In 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), pages
603–608. IEEE, 2015. 5, 20

[20] B LESSON VARGHESE , NAN WANG , S AKIL BARBHUIYA , P ETER K ILPATRICK , AND D IMITRIOS S.
N IKOLOPOULOS. Challenges and opportunities in edge computing. In 2016 IEEE International
Conference on Smart Cloud (SmartCloud), pages 20–26. IEEE, 2016. 5

[21] H ASIBUR R AHMAN AND R AHIM R AHMANI. Enabling distributed intelligence assisted future
Internet of Things controller (FITC). Applied Computing and Informatics, 14(1):73–87, 2018. 5,
16

[22] K EVIN A SHTON ET AL . That ‘Internet of Things’ thing. RFID Journal, 22(7):97–114, 2009. 13

[23] M OEEN H ASSANALIERAGH , A LEX PAGE , T OLGA S OYATA , G AURAV S HARMA , M EHMET A K -
TAS , G ONZALO M ATEOS , B URAK K ANTARCI , AND S ILVANA A NDREESCU . Health monitoring
and management using Internet-of-Things (IoT) sensing with cloud-based processing: Oppor-
tunities and challenges. In 2015 IEEE International Conference on Services Computing, pages
285–292. IEEE, 2015. 13

[24] E VERTON C AVALCANTE , J ORGE P EREIRA , M ARCELO P ITANGA A LVES , P EDRO M AIA , RON -
ICELI M OURA , T HAIS BATISTA , F LAVIA C. D ELICATO , AND PAULO F. P IRES . On the interplay
of Internet of Things and cloud computing: A systematic mapping study. Computer Communi-
cations, 89:17–33, 2016. 13

[25] M OHAMMAD A AZAM , I MRAN K HAN , AYMEN A BDULLAH A LSAFFAR , AND E UI -NAM H UH.
Cloud of Things: Integrating Internet of Things and cloud computing and the issues involved.
In Proceedings of 2014 11th International Bhurban Conference on Applied Sciences & Technology
(IBCAST) Islamabad, Pakistan, Jan. 14–18, 2014, pages 414–419. IEEE, 2014. 13

[26] S HAIK M ASTHAN BABU , A JAYA L AKSHMI , AND B T HIRUMALA R AO. A study on cloud based
Internet of Things: CloudIoT. In 2015 global conference on communication technologies (GCCT),
pages 60–65. IEEE, 2015. 13

[27] F EI L I , M ICHAEL VÖGLER , M ARKUS C LAESSENS , AND S CHAHRAM D USTDAR. Towards auto-
mated IoT application deployment by a cloud-based approach. In 2013 IEEE 6th International
Conference on Service-Oriented Computing and Applications, pages 61–68. IEEE, 2013. 14
[28] F LAVIO B ONOMI , RODOLFO M ILITO , J IANG Z HU , AND S ATEESH A DDEPALLI. Fog computing
and its role in the Internet of Things. In Proceedings of the First Edition of the MCC Workshop on
Mobile Cloud Computing, pages 13–16, 2012. 14

[29] Y UAN -YAO S HIH , W EI -H O C HUNG , A I -C HUN PANG , T E -C HUAN C HIU , AND H UNG -Y U W EI.
Enabling low-latency applications in fog-radio access networks. IEEE Network, 31(1):52–58,
2016. 14

[30] Z HAOLONG N ING , X IANGJIE KONG , F ENG X IA , W EIGANG H OU , AND X IAOJIE WANG. Green
and sustainable cloud of things: Enabling collaborative edge computing. IEEE Communications
Magazine, 57(1):72–78, 2018. 14

[31] R ENE M UELLER , JAN S. R ELLERMEYER , M ICHAEL D ULLER , AND G USTAVO A LONSO. A
generic platform for sensor network applications. In 2007 IEEE International Conference on
Mobile Adhoc and Sensor Systems, pages 1–3. IEEE, 2007. 14

[32] J ONG -WAN YOON , YONG - KI K U , C HOON -S UNG NAM , AND D ONG -RYE S HIN. Sensor network
middleware for distributed and heterogeneous environments. In 2009 International Conference
on New Trends in Information and Service Science (NISS 2009), pages 979–982. IEEE Computer
Society, 2009. 14

[33] DANIEL B IMSCHAS , H ORST H ELLBRÜCK , R ICHARD M IETZ , D ENNIS P FISTERER , K AY R ÖMER ,
AND T ORSTEN T EUBLER . Middleware for smart gateways connecting sensornets to the inter-
net. In Proceedings of the 5th International Workshop on Middleware Tools, Services and Run-Time
Support for Sensor Networks, pages 8–14, 2010. 14

[34] S HANG G UOQIANG , C HEN YANMING , Z UO C HAO , AND Z HU YANXU. Design and implemen-
tation of a smart IoT gateway. In 2013 IEEE International Conference on Green Computing and
Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, pages
720–723. IEEE, 2013. 14

[35] J IALI B IAN , D ENGKE FAN , AND J UNMING Z HANG. The new intelligent home control system
based on the dynamic and intelligent gateway. In 2011 4th IEEE International Conference on
Broadband Network and Multimedia Technology, pages 526–530. IEEE, 2011. 14

[36] W EI S HEN , YOUZHI X U , D ONGHUA X IE , T INGTING Z HANG , AND A LF J OHANSSON. Smart


border routers for ehealthcare wireless sensor networks. In 2011 7th International Conference on
Wireless Communications, Networking and Mobile Computing, pages 1–4. IEEE, 2011. 15

[37] V LADIMIR S TANTCHEV, A HMED BARNAWI , S ARFARAZ G HULAM , J OHANNES S CHUBERT, AND
G ERRIT TAMM. Smart items, fog and cloud computing as enablers of servitization in healthcare.
Sensors & Transducers, 185(2):121–128, 2014. 15

[38] A MIR M. R AHMANI , T UAN N GUYEN G IA , B EHAILU N EGASH , A RMAN A NZANPOUR , I MAN
A ZIMI , M INGZHE J IANG , AND PASI L ILJEBERG. Exploiting smart e-Health gateways at the
edge of healthcare Internet-of-Things: A fog computing approach. Future Generation Computer
Systems, 78:641–658, 2018. 15

[39] I MAN A ZIMI , A RMAN A NZANPOUR , A MIR M. R AHMANI , TAPIO PAHIKKALA , M ARCO L EVO -
RATO , PASI L ILJEBERG , AND N IKIL D UTT . HiCH: Hierarchical fog-assisted computing architec-
ture for healthcare IoT. ACM Transactions on Embedded Computing Systems (TECS), 16(5s):1–20,
2017. 15

[40] IBM C ORPORATION. An architectural blueprint for autonomic computing. IBM White Paper,
31(2006):1–6, 2006. 15

[41] Amit Badlani and Surekha Bhanot. Smart home system design based on artificial neural networks. In Proceedings of the World Congress on Engineering and Computer Science, 1, pages 19–21, 2011. 15

[42] Ali Hussein, Mehdi Adda, Mirna Atieh, and Walid Fahs. Smart home design for disabled people based on neural networks. Procedia Computer Science, 37:117–126, 2014. 15

[43] Homay Danaei Mehr, Huseyin Polat, and Aydin Cetin. Resident activity recognition in smart homes by using artificial neural networks. In 2016 4th International Istanbul Smart Grid Congress and Fair (ICSG), pages 1–5. IEEE, 2016. 15

[44] Jiho Park, Kiyoung Jang, and Sung-Bong Yang. Deep neural networks for activity recognition with multi-sensor data in a smart home. In 2018 IEEE 4th World Forum on Internet of Things (WF-IoT), pages 155–160. IEEE, 2018. 15

[45] Pan Wang, Feng Ye, and Xuejiao Chen. A smart home gateway platform for data collection and awareness. IEEE Communications Magazine, 56(9):87–93, 2018. 15

[46] Roberta Calegari, Giovanni Ciatto, Stefano Mariani, Enrico Denti, and Andrea Omicini. LPaaS as micro-intelligence: Enhancing IoT with symbolic reasoning. Big Data and Cognitive Computing, 2(3):23, 2018. 15

[47] Ali S. Allahloh and Sarfraz Mohammad. Development of the intelligent oil field with management and control using IIoT (Industrial Internet of Things). In 2018 2nd IEEE International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), pages 815–820. IEEE, 2018. 16

[48] Tariq Alsboui, Yongrui Qin, Richard Hill, and Hussain Al-Aqrabi. Enabling distributed intelligence for the Internet of Things with IOTA and mobile agents. Computing, 102:1345–1363, 2020. 16

[49] Praneeth Vepakomma, Otkrist Gupta, Tristan Swedish, and Ramesh Raskar. Split learning for health: Distributed deep learning without sharing raw patient data. arXiv preprint arXiv:1812.00564, 2018. 17

[50] Yansong Gao, Minki Kim, Sharif Abuadbba, Yeonjae Kim, Chandra Thapa, Kyuyeon Kim, Seyit A. Camtepe, Hyoungshick Kim, and Surya Nepal. End-to-end evaluation of federated learning and split learning for Internet of Things. arXiv preprint arXiv:2003.13376, 2020. 17

[51] Chandra Thapa, Pathum Chamikara Mahawaga Arachchige, Seyit Camtepe, and Lichao Sun. SplitFed: When federated learning meets split learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 36, pages 8485–8493, 2022. 18

[52] Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016. 18

[53] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273–1282. PMLR, 2017. 18

[54] Takayuki Nishio and Ryo Yonetani. Client selection for federated learning with heterogeneous resources in mobile edge. In 2019 IEEE International Conference on Communications (ICC), pages 1–7. IEEE, 2019. 18

[55] Houlin Zhao. Assessing the economic impact of artificial intelligence. ITU Trends: Emerging Trends in ICTs, 1, 2018. 18

[56] Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 37(6):1205–1221, 2019. 18

[57] Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek. Robust and communication-efficient federated learning from non-IID data. IEEE Transactions on Neural Networks and Learning Systems, 31(9):3400–3413, 2019. 19

[58] Hao Wang, Zakhary Kaplan, Di Niu, and Baochun Li. Optimizing federated learning on non-IID data with reinforcement learning. In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, pages 1698–1707. IEEE, 2020. 19

[59] Sai Qian Zhang, Jieyu Lin, and Qi Zhang. A multi-agent reinforcement learning approach for efficient client selection in federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 36, pages 9091–9099, 2022. 19

[60] Transforming radio access networks towards open, intelligent, virtualized and fully interoperable RAN. https://www.o-ran.org/. Accessed: 2023-06-14. 19

[61] Multi-access edge computing. https://www.etsi.org/technologies/multi-access-edge-computing. Accessed: 2023-06-14. 19

[62] Bao-Shuh Paul Lin et al. Toward an AI-enabled O-RAN-based and SDN/NFV-driven 5G & IoT network era. Network and Communication Technologies, 6(1):1–6, 2021. 19

[63] Michele Polese, Leonardo Bonati, Salvatore D'Oro, Stefano Basagni, and Tommaso Melodia. Understanding O-RAN: Architecture, interfaces, algorithms, security, and research challenges. IEEE Communications Surveys & Tutorials, 2023. 19

[64] Fabio Giust, Xavier Costa-Perez, and Alex Reznik. Multi-access edge computing: An overview of ETSI MEC ISG. IEEE 5G Tech Focus, 1(4):4, 2017. 19

[65] Sami Kekki, Walter Featherstone, Yonggang Fang, Pekka Kuure, Alice Li, Anurag Ranjan, Debashish Purkayastha, Feng Jiangping, Danny Frydman, Gianluca Verin, et al. MEC in 5G networks. ETSI White Paper, 28(2018):1–28, 2018. 19

[66] Dario Sabella, Alessandro Vaillant, Pekka Kuure, Uwe Rauschenbach, and Fabio Giust. Mobile-edge computing architecture: The role of MEC in the Internet of Things. IEEE Consumer Electronics Magazine, 5(4):84–91, 2016. 20

[67] Chih-Lin I, Sławomir Kukliński, and Tao Chen. A perspective of O-RAN integration with MEC, SON, and network slicing in the 5G era. IEEE Network, 34(6):3–4, 2020. 20

[68] Andres Garcia-Saavedra and Xavier Costa-Perez. O-RAN: Disrupting the virtualized RAN ecosystem. IEEE Communications Standards Magazine, 5(4):96–103, 2021. 20

[69] Basem Almadani, Abdurrahman Beg, and Ashraf Mahmoud. DSF: A distributed SDN control plane framework for the east/west interface. IEEE Access, 9:26735–26754, 2021. 20

[70] Yuhong Li, Xiang Su, Jukka Riekki, Theo Kanter, and Rahim Rahmani. A SDN-based architecture for horizontal Internet of Things services. In 2016 IEEE International Conference on Communications (ICC), pages 1–7. IEEE, 2016. 20

[71] Raül Muñoz, Ricard Vilalta, Noboru Yoshikane, Ramon Casellas, Ricardo Martínez, Takehiro Tsuritani, and Itsuro Morita. Integration of IoT, transport SDN, and edge/cloud computing for dynamic distribution of IoT analytics and efficient use of network resources. Journal of Lightwave Technology, 36(7):1420–1428, 2018. 21

[72] Akram Hakiri, Pascal Berthou, Aniruddha Gokhale, and Slim Abdellatif. Publish/subscribe-enabled software defined networking for efficient and scalable IoT communications. IEEE Communications Magazine, 53(9):48–54, 2015. 21

[73] Kashif Naseer Qureshi, Raza Hussain, and Gwanggil Jeon. A distributed software defined networking model to improve the scalability and quality of services for flexible green energy internet for smart grid systems. Computers & Electrical Engineering, 84:106634, 2020. 21

[74] Paul Johannesson and Erik Perjons. A design science primer. CreateSpace, 2012. 23, 25

[75] Jan vom Brocke, Alan Hevner, and Alexander Maedche. Introduction to design science research. Design Science Research. Cases, pages 1–13, 2020. 24

[76] Swedish Research Council. Good research practice. Swedish Research Council, 2017. 29

[77] Daniela Popescul and Mircea Georgescu. Internet of Things – some ethical issues. The USV Annals of Economics and Public Administration, 13(2(18)):208–214, 2014. 30

[78] Gianmarco Baldini, Maarten Botterman, Ricardo Neisse, and Mariachiara Tallacchini. Ethical design in the Internet of Things. Science and Engineering Ethics, 24:905–925, 2018. 30

[79] W. Pollard. IoT governance, privacy and security issues. Eur. Res. Clust. Internet Things, 876:23–31, 2015. 30

[80] Ramin Firouzi, Rahim Rahmani, and Theo Kanter. An autonomic IoT gateway for smart home using fuzzy logic reasoner. Procedia Computer Science, 177:102–111, 2020. 33, 34, 35

[81] Ramin Firouzi, Rahim Rahmani, and Theo Kanter. Distributed-reasoning for task scheduling through distributed Internet of Things controller. Procedia Computer Science, 184:24–32, 2021. 36

[82] Ramin Firouzi, Rahim Rahmani, and Theo Kanter. Federated learning for distributed reasoning on edge computing. Procedia Computer Science, 184:419–427, 2021.

[83] Ramin Firouzi and Rahim Rahmani. A distributed SDN controller for distributed IoT. IEEE Access, 10:42873–42882, 2022.

[84] Ramin Firouzi and Rahim Rahmani. 5G-enabled distributed intelligence based on O-RAN for distributed IoT systems. Sensors, 23(1):133, 2022. 57

[85] Ramin Firouzi and Rahim Rahmani. Minimum delay and resource allocation for IoT systems in O-RAN 5G networks. Internet of Things; Engineering Cyber Physical Human Systems. 33

[86] Shadi Attarha, Koosha Haji Hosseiny, Ghasem Mirjalily, and Kiarash Mizanian. A load balanced congestion aware routing mechanism for software defined networks. In 2017 Iranian Conference on Electrical Engineering (ICEE), pages 2206–2210. IEEE, 2017. 56

[87] Photovoltaics generation and electrical vehicles charging data. https://dataport.pecanstreet.org/. Accessed: 2023-06-14. 60
