ScienceDirect Citations 1723361654030
…ectively, one must possess expert knowledge in various fields (problem domain knowledge, optimization, parallel and distributed computing) and appropriate, expensive software and hardware resources. In this regard, we present a cloud-native, container-based distributed optimization framework that enables efficient and cost-effective optimization over platforms such as Amazon ECS/EKS, Azure AKS, and on-premise Kubernetes. The solution consists of dozens of microservices scaled out using a specially developed PETAS Auto-scaler based on predictive analytics. Existing schedulers, whether Kubernetes or commercial, do not take into account the specifics of optimization based on evolutionary algorithms; therefore, their performance is not optimal in terms of results’ delivery time and cloud infrastructure costs. The proposed PETAS Auto-scaler elastically maintains an adequate number of worker pods, following the exact pace dictated by the demands of the optimization process. We evaluate the proposed framework’s performance using two real-world, computationally demanding optimizations. The first use case belongs to the manufacturing domain and involves optimization of the transportation pallets for train parts. The s…

…Interactive Systems and Applications

…Information & Engineering Systems: Proceedings of the 23rd International Conference KES2019

…storage to ensure the scalability and availability of applications. This paper explores an application that can be used as a decision support system for heart disease prognosis. It discusses deployment strategies on a cloud-native model and an edge-optimized model. The application contains a customized prediction pipeline named ClassifyIT with a custom neural network architecture called IPANN, supported by a feature selector named MIST-CC and a regularizer named STIR. ClassifyIT was observed to give an accuracy of 87.16% on the Cleveland dataset, compared to 78.80% for a regular deep network. The addition of the MIST-CC feature selection algorithm to the deep network was shown to improve its accuracy to 81.97%, and it is further enhanced to 85.54% by adding STIR. This pipeline is then deployed on an application based on a cloud-native architecture that uses microservices. The design is expanded to an edge-optimized architecture that improves scalability by moving part of the computation to the user device. The machine learning pipeline is further enhanced using federated learning to improve localization and collaborative learning. Both architectures are compared in a subjective fashion based on various para…

…ENTERprise Information Systems / ProjMAN – International Conference on Project MANagement / HCist – International Conference on Health and Social Care Information Systems and Technologies 2022

…ise Resource Planning, which are deployed as a monolithic entity on clients’ premises. Moreover, many clients, especially big organizations, often require software products to be customized for their specific needs before deployment on premises. Objective: However, as software vendors are migrating their monolithic software products to Cloud-native Software-as-a-Service (SaaS), they face two big challenges that this paper aims at addressing: (1) how to migrate their exclusive monoliths to multi-tenant Cloud-native SaaS; and (2) how to enable tenant-specific customizations for multi-tenant Cloud-native SaaS. Method: This paper suggests an approach for migrating monoliths to microservice-based Cloud-native SaaS, providing customers with a flexible customization opportunity while taking advantage of the economies of scale that the Cloud and multi-tenancy provide. We develop two proofs-of-concept to demonstrate our approach by migrating a Microsoft reference application called SportStore to a customizable SaaS, as well as customizing another Microsoft microservices reference application called eShopOnContainers. Results: We have shown not only the migration to microservices but also how to introduce the ne…

…d agility and flexibility that was missing from virtual machines. However, this paradigm shift of using cloud-native application design principles, containerization, microservices, high resilience, and on-demand scaling has created challenges for legacy orchestration systems, which were designed for handling virtual machine-based network functions. Indeed, network slice orchestration requires interaction with multiple technological domain orchestrators: access, transport, core network, and edge computing. The specifications and existing orchestrators are built on top of legacy virtual machine-based network function orchestration; hence, this limitation constrains their approach to managing cloud-native network functions. To overcome these challenges, we propose a novel Cloud-native Lightweight Slice Orchestration (CLiSO) framework extending our previously proposed Lightweight edge Slice Orchestration (LeSO) framework. In addition, we present a technology-agnostic and deployment-oriented network slice template. To allow zero-touch management of network slices, our framework provides a concept of Domain Specific Handlers. The framework has been thoroughly evaluated via orchestrating OpenAirInterface container…

…ple loosely coupled and independently deployable components in the form of peers. These peers are heterogeneous and autonomous by nature and behave according to their own interests. In a distributed environment, some peers subsidize others and could behave vulnerably or selfishly. These selfish peers severely affect the performance of the distributed P2P network if not addressed adequately. In distributed microservices applications, these selfish peers are addressed in two ways, i.e., identification and mitigation of the selfish peers. This study presents an AI-based approach towards identifying selfish peers in P2P microservices distributed networks with nature-inspired algorithms for feature selection. Efforts are also made to generate a real dataset, SAMPARK, following the standard procedure for the distributed P2P network. The developed hybrid feature selection (FS) technique, composed of Grey-Wolf-Optimization (GWO) and Particle-Swarm-Optimization (PSO), is used for selecting the important feature subset. The feature selection technique combines AI-enabled techniques to develop six hybrid models for the identification of selfish peers in microservice networks. In experimentation, the effectiveness of all the prop…

…They have hard limits on Quality of Service constraints that must be maintained despite network fluctuations and varying peaks of load. Consequently, such applications must adapt elastically on demand, and so must be capable of reconfiguring themselves, along with the underlying cloud infrastructure, to satisfy their constraints. Software engineering tools and methodologies currently do not support such a paradigm. In this paper, we describe a framework that has been designed to meet these objectives as part of the EU SWITCH project. SWITCH offers a flexible co-programming architecture that provides an abstraction layer and an underlying infrastructure environment, which can help to both specify and support the life cycle of time-critical cloud-native applications. We describe the architecture, design and implementation of the SWITCH components and describe how such tools are applied to three time-critical real-world use cases.

…ctural designs play a central role in building distributed systems and services. On one hand, they bring convenience and simplicity to building massively scalable distributed cloud-native applications and enable continuous development and delivery for their services. On the other hand, they widen the surface of malicious intrusions, which, in turn, without proper defense mechanisms, lessens their benefits to a certain degree. Among the biggest threats of malicious intrusions are those that belong to the Distributed Denial of Service (DDoS) family. Such attacks are challenging because DDoS attacks are elevated, hard-to-absorb threats with a high degree of variability in types, design, and complexity. In this work, a resilient backpropagation neural network was used to build an intelligent network intrusion detection model against the most modern DDoS attacks in the cloud-native computing environment. We evaluated our proposed model using the benchmark Canadian Institute for Cybersecurity CICDDoS 2019 dataset. Our proposed detection model achieved high reflective DDoS attack detection. Therefore, it is appropriate for defending against reflective DDoS attacks in containerized cloud-native pl…

…nd microservice-based applications, but is limited by the lack of high-quality datasets with diverse scenarios. Realistic workloads are the premise and basis of generating such AIOps datasets, with the session-based workload being one of the most typical examples. Due to privacy concerns, complexity, variety, and requirements for reasonable intervention, it is difficult to copy or generate such workloads directly, showing the importance of effective and intervenable workload simulation. In this paper, we formulate the task of workload simulation and propose a framework for Log-based Workload Simulation (LWS) in session-based systems. LWS extracts the workload specification, including the user behavior abstraction based on agglomerative clustering as well as relational models, and the intervenable workload intensity from session logs. Then LWS combines the user behavior abstraction with the workload intensity to generate simulated workloads. The experimental evaluation is performed on an open-source cloud-native application with both well-designed and public real-world workloads, showing that the simulated workload generated by LWS is effective and intervenable, which provides the foundation for generating high-quali…

…ehicles, spacecraft computing, etc. Not only end users but also computing infrastructures can change their location. The applications are composed of many interdependent microservices with specific resource requirements. The latest state of the art in Cloud/Fog infrastructures considers only the computational and network (latency, bandwidth) resources during scheduling of microservices, and only in a static way. The network variability when the nodes are moving in space is not considered. This leads to increased total latency, hindering Quality of Service (QoS) and increasing network costs. Many researchers focus only on a theoretical level. This paper proposes a novel technique for network-aware dynamic allocation of interdependent microservices on moving infrastructure nodes that is applicable in practice. It is composed of a generic MILP optimization model and an implementation in a Cloud/Fog platform. Several examples with a sample Edge-Native application are obtained. The results show a reduction in the total end-to-end network latency compared to the latest state of the art.

…ocess automation for verticals such as Smart City, Healthcare and Industry 4.0. Edge resources are limited when compared to traditional Cloud data centres; hence, the choice of proper resource management strategies in this context becomes paramount. Microservice and Function-as-a-Service architectures support modular and agile patterns, compared to a monolithic design, through lightweight containerisation, continuous integration/deployment and scaling. The advantages brought about by these technologies may initially seem obvious, but we argue that their usage at the Edge deserves a more in-depth evaluation. By analysing both the software development and deployment lifecycle, along with performance and resource utilisation, this paper explores microservices and two alternative types of serverless functions to build edge real-time IoT analytics. In the experiments comparing these technologies, microservices generally exhibit slightly better end-to-end processing latency and resource utilisation than serverless functions. One of the serverless functions and the microservices excel at handling larger data streams with auto-scaling. Whilst serverless functions natively offer this feature, the choice of container orchestratio…

…tinuous manner. We present the Theodolite method for benchmarking the scalability of distributed stream processing engines. The core of this method is the definition of use cases that microservices implementing stream processing have to fulfill. For each use case, our method identifies relevant workload dimensions that might affect the scalability of a use case. We propose to design one benchmark per use case and relevant workload dimension. We present a general benchmarking framework, which can be applied to execute the individual benchmarks for a given use case and workload dimension. Our framework executes an implementation of the use case's dataflow architecture for different workloads of the given dimension and various numbers of processing instances. This way, it identifies how resource demand evolves with increasing workloads. Within the scope of this paper, we present 4 identified use cases, derived from processing Industrial Internet of Things data, and 7 corresponding workload dimensions. We provide implementations of 4 benchmarks with Kafka Streams and Apache Flink, as well as an implementation of our benchmarking framework to execute scalability benchmarks in cloud environments. We use b…

…nments, offering the benefits of traditional virtualization with reduced overhead. However, existing container networking solutions lack support for applications requiring isolated link-layer communications among containers in different clusters. These communications are fundamental to enabling the seamless integration of cloud-native solutions in 5G and beyond networks. Accordingly, we present an SDN-enabled networking solution that supports the creation of isolated link-layer virtual networks between containers across different Kubernetes clusters by building virtual circuits that dynamically adapt to changes in the topology. In this article, we introduce our solution, highlighting its advantages over existing alternatives, and provide a comprehensive design overview. Additionally, we validate it through an experiment, offering a deeper understanding of its functionality. Our work fills an existing gap for applications with inter-cluster link-layer networking access requirements in the cloud-native ecosystem.

…and loosely coupled modules are deployed and scaled independently to compose cloud-native applications. Carrier-grade service providers are migrating their legacy applications to a microservice-based architecture running on Kubernetes, an open-source platform for orchestrating containerized microservice-based applications. However, in this migration, service availability remains a concern. Service availability is measured as the percentage of time the service is provisioned. High Availability (HA) is achieved when the service is available at least 99.999% of the time. In this paper, we identify possible architectures for deploying stateful microservice-based applications with Kubernetes and evaluate Kubernetes from the perspective of the availability it provides for its managed applications. The results of our experiments show that the repair actions of Kubernetes cannot satisfy HA requirements, and in some cases cannot guarantee service recovery. Therefore, we propose an HA State Controller which integrates with Kubernetes and allows for application state replication and automatic service redirection to healthy microservice instances, enabling service recovery in addition to the repair actions of Kubernetes. Based…

…nt within distributed and resource-constrained Fog computing environments. As a cloud-native application architecture, the true power of microservices comes from their loosely coupled, independently deployable and scalable nature, enabling distributed placement and dynamic composition across federated Fog and Cloud clusters. Thus, it is necessary to develop novel placement algorithms that utilise these microservice characteristics to improve the performance of the applications. However, existing Fog computing frameworks lack support for integrating such placement policies due to their shortcomings in multiple areas, including MSA application placement and deployment across multi-fog multi-cloud environments, dynamic microservice composition across multiple distributed clusters, scalability of the framework to operate within federated environments, and support for deploying heterogeneous microservice applications. To this end, we design and implement MicroFog, a Fog computing framework compatible with cloud-native technologies such as Docker, Kubernetes and Istio. MicroFog provides an extensible and configurable control engine that executes placement algorithms and deploys applications across federated F…

…h frequent changes in components to support customized production for personalized consumer needs. First, we introduce the cloud-native concept to build component integration manufacturing middleware, to realize the unity of software and hardware components, including application, service, model, data, workflow, etc. On the edge and cloud, node services are deployed in containers to form a service mesh; microservices are orchestrated in the middleware; continuous integration and deployment are used at the frontground to design and produce. Second, by elaborating on the specific process of using the middleware for design and production, the improvement of the flexibility of customized production is demonstrated from three aspects: product, process, and resource. Last, the proposed approach is evaluated by building a customized production platform according to the middleware design concept. According to the need for different types of flexibility, corresponding experiments were designed and verified. Experimental results show that the proposed middleware design approach can provide multiple types of flexibility for the production system, enabling the cyber-physical production system to support customized product…

…has gained significant popularity. Service meshes involve the incorporation of proxies to handle communication between microservices, thereby speeding up the development and deployment of microservice applications. However, the use of service meshes also increases the request latency because they elongate the packet transmission path between services. After investigating the transmission path of packets in a representative service mesh, Istio, we observed that the service mesh dedicates approximately 25% of its time to packet transmission in the Linux kernel network stack. To shorten this process, we propose a non-intrusive solution that enables packets to bypass the kernel network stack through the implementation of socket redirection and tc (traffic control) redirection with eBPF (extended Berkeley Packet Filter). We also conduct comprehensive experiments on the widely used Istio. The evaluation results show that our approach can significantly reduce the request latency, by up to 21%. Furthermore, our approach decreases CPU usage by 1.73% and reduces memory consumption by approximately 0.98% when compared to the original service mesh implementation.

…applications are often deployed on the cloud since it offers a convenient on-demand model for renting resources and easy-to-use elastic infrastructures. Moreover, modern software engineering disciplines exploit orchestration tools such as Kubernetes to run cloud applications based on a set of microservices packaged in containers. On the one hand, in order to ensure the users’ experience, it is necessary to allocate a sufficient number of container instances before the workload intensity surges at runtime. On the other hand, renting expensive cloud-based resources can be unaffordable over a long period of time. Therefore, the choice of a reactive auto-scaling method may significantly affect both response time and resource utilisation. This paper presents a set of key factors which should be considered in the development of auto-scaling methods. Through a set of experiments, a discussion follows to help shed light on how such factors influence the performance of auto-scaling methods under different workload conditions, such as on-and-off, predictable and unpredictable bursting workload patterns. Due to suitable results, the proposed set of key factors is exploited in the PrEstoCloud software system for microservice-native c…

…s ushered in an unprecedented period of vigorous development. However, due to the mutability and complexity of cooperation procedures, it is difficult to realize highly efficient security management for these microservices. Traditional centralized access control suffers from reliance on a centralized cloud manager and a single point of failure. Meanwhile, decentralized mechanisms suffer from inconsistent policies defined by different participants. This paper first proposes blockchain-based distributed access control policies and a scheme, especially for microservices cooperation with dynamic access policies. We store the authorized security policies on the blockchain to solve the inconsistent-policy problem while enabling individual management of personalized access policies by the providers rather than a central authority. Then we propose a graph-based decision-making scheme to achieve efficient access control for microservices cooperation. Through evaluations and experiments, we show that our solution can realize effective distributed access control at an affordable cost.

…massive Internet-of-Things (IoT) devices around us and novel mobile applications, especially computing-intensive and latency-sensitive ones. Meanwhile, with the rapid development of cloud-native technologies in recent years, delivering Artificial Intelligence (AI) capabilities in a microservice way in MEC environments has become practical. However, current MEC systems are still restricted by limited computing resources and a highly dynamic network topology, which leads to high service deployment/maintenance costs. Therefore, how to cost-effectively and robustly deploy edge AI microservices in failure-prone MEC environments has become a hot issue. In this study, we consider an edge AI microservice that can be implemented by composing multiple Deep Neural Network (DNN) models; in this way, the features of different DNN models are aggregated and the deployment cost can be further reduced while fulfilling the Quality-of-Service (QoS) constraint. We propose a Three-Dimension-Dynamic-Programming-based algorithm (TDDP) to yield cost-effective multi-DNN orchestration and load allocation plans. For the robust deployment of the yielded orchestration plan, we also develop a robust microservice instance…

…work and service-based interfaces. Moreover, the 5G architecture is suitable for the exploitation of mobile technology for dedicated, non-public uses as an alternative to nation-wide deployments. The 5G core networks are a crucial part of this architectural paradigm shift, which aims at closing the gap between the telecommunications domain and the information technology world at large. The objective of this article is to discuss the adoption of software design concepts like microservices and cloud-nativeness in the context of mobile networks. Specifically, we will (i) advocate the need for a non-trivial adaptation of the 5G core network and a redesign of its functions into a microservice-based architecture, (ii) identify an approach to achieve this objective and put it into practice by decomposing three exemplary network functions, both theoretically and practically, into microservices in charge of distinct responsibilities, and (iii) propose ways forward towards the adoption and further extension of these concepts in beyond-5G mobile systems.

…lications are decomposed into multiple fine-grained microservices, strategically deployed on various fog nodes to support a wide range of IoT scenarios, such as smart cities and smart farming. Nonetheless, the performance of these IoT applications is affected by their limited effectiveness in processing offloaded IoT requests originating from multiple IoT devices. Specifically, the requested IoT services are composed of multiple dependent microservice instances collectively referred to as a service plan (SP). Each SP comprises a series of tasks designed to be executed in a predefined order, with the objective of meeting heterogeneous Quality of Service (QoS) requirements (e.g., low service delays). Different from the cloud, selecting the appropriate service plan for each IoT request can be a challenging task in dynamic fog environments due to the dependency and decentralization of microservice instances, along with the instability of network conditions and service requests (i.e., they change quickly over time). To deal with this challenge, we study the microservice instance selection problem for IoT applications deployed on fog platforms and propose a learning-based approach that employs Deep Reinforcement Learning (DRL) to co…
nsportation pallets for train parts. The second use case belongs to the field of automated machine learning and includes neural architecture
rvices but also how to introduce the necessary infrastructure to support the new services and enable tenant-specific customization. Conclu
chestrating OpenAirInterface container network functions on public and private cloud platforms.
ntation, the effectiveness of all the proposed algorithms and models is found satisfactory, with an achieved maximum accuracy of 99.60% a
attacks in containerized cloud-native platforms. Experimental results indicate that the DDoS attack detection accuracy of the proposed resili
the foundation of generating high-quality AIOps datasets.
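Several entries above evaluate systems under on-off, predictable-bursting and unpredictable-bursting workload patterns. As a toy illustration only (none of the cited workload simulators are reproduced here; the function and its parameters are hypothetical), such request-rate patterns can be sketched as:

```python
import math
import random

def workload(pattern, t, base=100, burst=400, period=60):
    """Requests per second at time t for a given pattern (illustrative only)."""
    if pattern == "on-off":
        # Alternate between full load and idle every half period.
        return base if (t // (period // 2)) % 2 == 0 else 0
    if pattern == "predictable-burst":
        # Smooth periodic peak on top of a base rate.
        return base + burst * max(0.0, math.sin(2 * math.pi * t / period))
    if pattern == "unpredictable-burst":
        # Rare random spikes on top of a base rate.
        return base + (burst if random.random() < 0.05 else 0)
    raise ValueError(pattern)
```

A driver would sample `workload(pattern, t)` per tick and issue that many requests against the system under test.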
ure, the choice of container orchestration framework may determine its availability for microservices. The other serverless function, while su
marks in cloud environments. We use both for evaluating the Theodolite method and for benchmarking Kafka Streams' and Flink's scalabilit
e repair actions of Kubernetes. Based on experiments we evaluate our solution and compare the different architectures from the perspectiv
eploys applications across federated Fog environments. Furthermore, MicroFog provides a sufficient abstraction over container orchestratio
system to support customized production.
develop a robust microservice instance placement algorithm (TLLB) by considering the three levels of load balance including applications, s
ep Reinforcement Learning (DRL) to compute the optimal service plans. The latter optimizes the delay of application requests while effectiv
arning and includes neural architecture search and hyperparameter optimization. The results indicate an IaaS cost savings of up to 49% ca
e tenant-specific customization. Conclusions: Our customization-driven migration approach can guide a monolith to become SaaS having (s
etection accuracy of the proposed resilient neural network model is as high as 97.07% which outperforms most of the well-known learning m
The other serverless function, while supporting a simpler lifecycle, is more suitable for low-invocation scenarios and faces challenges with p
ng Kafka Streams' and Flink's scalability for different deployment options.
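The Theodolite method, roughly, searches for each tested load intensity for the smallest resource amount that still satisfies the SLO. A minimal sketch of that search idea only (the real framework runs Kubernetes-deployed benchmarks; `meets_slo` here is an injected stand-in, not part of Theodolite):

```python
def scalability_profile(loads, max_instances, meets_slo):
    """For each load intensity, find the fewest instances satisfying the SLO.

    meets_slo(load, instances) would wrap a real benchmark run; injecting it
    as a callable keeps the search logic itself testable.
    """
    profile = {}
    for load in loads:
        profile[load] = next(
            (n for n in range(1, max_instances + 1) if meets_slo(load, n)),
            None,  # SLO unreachable within the resource budget
        )
    return profile
```

The resulting load-to-resources map is one way to visualise a scalability curve.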
erent architectures from the perspective of availability and scaling overhead. The results of our investigations show that our solution can im
abstraction over container orchestration and dynamic microservice composition, thus enabling users to easily incorporate new placement p
f load balance including applications, servers, and DNN models. Experiments based on real-world edge environments have demonstrated
ay of application requests while effectively balancing the load among microservice instances. In our selection process, we carefully address
e an IaaS cost savings of up to 49% can be achieved, with almost unchanged result delivery time.
e a monolith to become SaaS having (synchronous and asynchronous) customization power for multi-tenant SaaS. Furthermore, our event-
orms most of the well-known learning models mentioned in the most related work. Moreover, the proposed model has achieved a competitiv
scenarios and faces challenges with parallel requests and inherent overhead, making it less suitable for real-time processing in demanding
stigations show that our solution can improve the recovery time of stateful microservice based applications by 50%.
s to easily incorporate new placement policies and evaluate their performance. The capabilities of the MicroFog framework, such as the sca
dge environments have demonstrated that the proposed orchestration and placement methods can achieve lower deployment costs and les
selection process, we carefully address the plan-dependency to efficiently select valid service plans for every request by introducing two dis
-tenant SaaS. Furthermore, our event-based customization approach can reduce the number of API calls to the main product while enablin
posed model has achieved a competitive run time performance that highly meets the delay requirements of containerized cloud computing.
ations by 50%.
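Redirecting traffic to healthy instances, as the controller described above does, reduces to a per-endpoint health decision. A minimal, hypothetical sketch of that decision (the cited controller's actual mechanism is not shown; names and the heartbeat scheme are assumptions):

```python
def healthy_targets(endpoints, heartbeats, now, timeout=5.0):
    """Return endpoints whose last heartbeat is recent enough to count as healthy.

    A controller (e.g. a Kubernetes operator) could then route traffic only to
    these instances, complementing Kubernetes' own repair actions.
    """
    return [ep for ep in endpoints
            if now - heartbeats.get(ep, float("-inf")) <= timeout]
```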
e MicroFog framework, such as the scalability and flexibility of the design and deployment architecture of MicroFog and its ability to ensure
achieve lower deployment costs and less QoS loss when faced with edge node failures than traditional approaches.
or every request by introducing two distinct approaches; an action masking approach and an adaptive action mapping approach. Additional
calls to the main product while enabling different tenant-specific customization services for real-world scenarios.
e of MicroFog and its ability to ensure the deployment and composition of microservices across distributed fog–cloud environments, are va
al approaches.
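The TDDP algorithm itself is not reproduced in the entry above. As a generic illustration of dynamic programming over a load-allocation dimension (all names hypothetical, not the paper's formulation), the following sketch splits discrete load units across servers to minimise total cost:

```python
from functools import lru_cache

def min_cost_allocation(load_units, server_costs):
    """Split load_units across servers minimising total cost.

    server_costs[s][u] is the cost of placing u units on server s.
    Classic DP over (server index, remaining load).
    """
    n = len(server_costs)

    @lru_cache(maxsize=None)
    def best(s, remaining):
        if s == n:
            return 0 if remaining == 0 else float("inf")
        # Try every feasible share u for server s, recurse on the rest.
        return min(
            server_costs[s][u] + best(s + 1, remaining - u)
            for u in range(min(remaining, len(server_costs[s]) - 1) + 1)
        )

    return best(0, load_units)
```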
e action mapping approach. Additionally, we propose an improved experience replay to address delayed action effects and enhance our m
d scenarios.
ibuted fog–cloud environments, are validated using multiple use cases. Experiments also demonstrate MicroFog’s ability to integrate and e
ayed action effects and enhance our model training efficiency. A series of experiments were conducted to assess the performance of our M
te MicroFog’s ability to integrate and evaluate novel placement policies and load-balancing techniques. To this end, we integrate multiple m
ed to assess the performance of our Microservice Instances Selection Policy (MISP) approach. The results demonstrate that our model red
es. To this end, we integrate multiple microservice placement policies to demonstrate MicroFog’s ability to support horizontally scaled place
results demonstrate that our model reduces the average failure rate by up to 65% and improves load balance by up to 45% on average whe
lity to support horizontally scaled placement, service discovery and load balancing of microservices across federated environments, thus re
balance by up to 45% on average when compared to the baseline algorithms.
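Action masking, as mentioned in the MISP entry above, forces invalid actions (e.g. service plans that violate plan dependencies) to exactly zero probability before the policy samples an action. A minimal numerical sketch of the idea (not the cited implementation):

```python
import math

def masked_softmax(logits, valid):
    """Softmax over logits with invalid actions masked out.

    Invalid actions get logit -inf, hence probability exactly 0, so the
    policy can never sample them.
    """
    masked = [l if ok else float("-inf") for l, ok in zip(logits, valid)]
    m = max(x for x in masked if x != float("-inf"))  # for numerical stability
    exps = [math.exp(x - m) if x != float("-inf") else 0.0 for x in masked]
    z = sum(exps)
    return [e / z for e in exps]
```

In a DRL agent the mask would come from the environment's current set of valid service plans.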
across federated environments, thus reducing the application service response time up to 54%.