
(IJACSA) International Journal of Advanced Computer Science and Applications,

Vol. 15, No. 6, 2024

Quality of Service-Oriented Data Optimization in Networks using Artificial Intelligence Techniques
Zhenhua Yang1#, Qiwen Yang2#, Minghong Yang3*
School of Information Engineering, Hunan Applied Technology University, Changde, China1
School of Computer Science, Beihang University, Beijing, China2
School of Economics and Management, Hunan Applied Technology University, Changde, China3

Abstract—This paper outlines a comprehensive AI-driven Quality of Service (QoS) optimization method and presents a rigorous examination of its effectiveness through extensive experimentation and analysis. By applying real-world datasets to simulate network environments, the study systematically evaluates the proposed method's impact across various QoS metrics. Key findings reveal substantial enhancements in reducing average latency, minimizing packet loss, and boosting bandwidth utilization compared to baseline scenarios, with the Deep Deterministic Policy Gradient (DDPG) model showing the most notable improvements. The research demonstrates that AI optimization strategies, particularly those leveraging the DQN and DDPG algorithms, significantly improve upon conventional methods. Specifically, post-migration optimization leads to a recovery and even surpassing of pre-migration QoS levels, with delays dropping below initial readings, packet loss nearly eliminated, and bandwidth utilization markedly improved. The study further illustrates that while lower learning rates necessitate longer convergence times, they ultimately yield superior model performance and stability. In-depth case studies within a cloud data center setting underscore the system's proficiency in handling large-scale Virtual Machine (VM) migrations with minimal disruption to network performance. The AI-driven QoS optimization successfully mitigates the typical latency spikes, packet loss increases, and resource utilization dips associated with VM migrations, thereby affirming its practical value in maintaining high network efficiency and stability during such operations. Comparative analyses against traditional traffic engineering methods, rule-based controls, and other machine learning approaches consistently place the AI optimization method ahead, achieving up to an 8% increase in throughput alongside a 2 ms decrease in latency. Furthermore, the technique reduces packet loss by 25% and elevates resource utilization, underscoring its strength in enhancing network efficiency and stability. Robustness and scalability assessments validate the method's applicability across diverse network scales, traffic patterns, and congestion levels, confirming its adaptability and effectiveness in a wide array of operational contexts. Overall, the research demonstrates the AI-driven QoS optimization system's capacity to tangibly enhance network performance, positioning it as a highly efficacious solution for contemporary networking challenges.

Keywords—Artificial intelligence; networking; quality of service-oriented; data optimization

*Corresponding Author.
# Zhenhua Yang and Qiwen Yang contributed equally to this work and are co-first authors.

I. INTRODUCTION

Today's world is undergoing an unprecedented digital transformation, and the iterative upgrading of information technology is constantly reshaping economic structures and social life. From smart homes to smart cities, from distance education to telemedicine, every emerging application places higher requirements on network service quality. The network is not only a pipeline for data transmission but also a nervous system that supports the operation of society. Therefore, ensuring the efficient, stable and secure operation of the network is directly related to the effectiveness and sustainability of digital transformation [1].

With the commercial deployment of 5G technology and the initial launch of 6G research and development, mobile communications have entered a whole new stage of development. Higher data rates, lower latency, and greater connection density make cutting-edge applications such as autonomous driving, Industry 4.0, and immersive entertainment possible. At the same time, these applications demand an unprecedented level of network QoS. How to adjust network resource allocation in real time and precisely, so as to meet the differentiated demands of various applications in a complex and changing network environment, has become a key issue to be solved. Traditional network management relies on preset rules and manual intervention, which makes it difficult to adapt to the dynamic changes and complexity of the modern network environment. Statically configured policies are often unable to respond flexibly to unexpected traffic, network congestion or failure events, resulting in QoS degradation and impaired user experience [2].

The rise of artificial intelligence, especially machine learning and deep learning, has provided new ideas and tools for network QoS optimization. AI is able to process massive amounts of network data, learn network behavior patterns, predict traffic trends, and automatically optimize network configurations in order to improve resource utilization, reduce latency, and enhance stability. Although AI has great potential in network QoS optimization, the path to its realization is not smooth. How to effectively combine AI algorithms with network engineering practice, how to realize data-driven decision making while safeguarding privacy and security, how to address model interpretability to enhance trust, and how to harmonize across different network architectures (e.g., cloud, edge, and end) are the main challenges currently faced.


Therefore, in-depth research on the application of AI in QoS optimization is not only a need for technological innovation, but also an inevitable choice for promoting a robust digital society [3].

In recent years, research on network QoS optimization and on AI applications in communication networks has made significant progress. Early work focused on the establishment of QoS models and the application of traditional optimization algorithms; e.g., Ghafoor et al. [1] explored a QoS guarantee mechanism based on the DiffServ model, while Babaei et al. [2] analyzed the application and limitations of the IntServ model in multimedia transmission. Subsequently, with the development of AI technology, the research focus gradually shifted to utilizing AI to enhance network performance [3].

In terms of traffic prediction, Alkanhel et al. [4] propose a network traffic prediction model based on deep learning, which effectively improves prediction accuracy and provides a basis for resource scheduling. A breakthrough has also been made in applying AI to resource allocation: Malhotra et al. [5] demonstrate a dynamic spectrum allocation scheme based on reinforcement learning, which significantly improves spectrum utilization. In addition, AI shows great potential in fault detection and self-healing network construction; e.g., the AI-assisted fault management system developed by Bendavid et al. [6] is able to rapidly localize and repair network problems. Nevertheless, some issues remain insufficiently addressed in existing research, such as the interpretability of AI models, their generalization capabilities, and the challenges of applying them in large-scale heterogeneous network environments. How to efficiently integrate AI with traditional network management frameworks, and how to ensure the transparency and security of AI decisions, are also important open questions [7]. This research is dedicated to analyzing the potential of Artificial Intelligence (AI) in the field of Quality of Service (QoS) optimization, focusing on the following core issues. First, to address the specific challenges of large-scale network environments, the research aims to design and implement an AI-driven QoS optimization framework that can adapt to the high dynamics and complexity of such environments, ease the deployment of QoS optimization in large-scale networks, and improve the efficiency and performance of QoS optimization at scale. Second, the study explores in detail specific applications of deep learning and reinforcement learning models in accurately predicting network behavior patterns and implementing intelligent resource scheduling, and further optimizes the strategies of these models to maximize the utilization efficiency of network resources and the quality of service delivery.

The strengths of this paper include a comprehensive AI-driven QoS optimization method that is supported by extensive experimentation and analysis using real-world datasets. The research systematically evaluates the method's impact on various QoS metrics and demonstrates significant improvements compared to baseline scenarios. The study also includes in-depth case studies within a cloud data center setting, showcasing the system's ability to handle large-scale VM migrations with minimal disruption to network performance. Comparative analyses consistently show the AI optimization method outperforming traditional traffic engineering methods, rule-based controls, and other machine learning approaches. Additionally, the paper validates the method's robustness and scalability across diverse network scales, traffic patterns, and congestion levels, confirming its adaptability and effectiveness in various operational contexts.

The paper is organized as follows. Section II reviews existing research on AI-based QoS optimization, covering the application of machine learning, deep learning, and reinforcement learning techniques, and discusses current challenges and future research directions. Section III details the proposed AI-driven QoS optimization methodology and is divided into two parts: A. Problem Modeling, which establishes the mathematical model of the QoS optimization problem, including the objective function, constraints and symbol definitions; and B. Method Design, which describes a framework based on Deep Reinforcement Learning (DRL), including Deep Q-Networks (DQN) and an Actor-Critic architecture, and covers the algorithm principles, architecture design and parameter tuning strategies. Section IV describes the experimental design and the simulation setup using real datasets, and Section V compares the QoS metrics before and after optimization to demonstrate the effectiveness of the proposed AI optimization method. Finally, the paper is concluded in Section VI.

II. LITERATURE REVIEW

A. State of Artificial Intelligence in QoS Data Optimization

In recent years, the rapid development of Artificial Intelligence (AI) techniques, especially Machine Learning (ML), Deep Learning (DL), and Reinforcement Learning (RL), has opened up new research paths and practice areas for network Quality of Service (QoS) data optimization. The introduction of AI has enabled network management to move toward intelligence and automation, helping to build self-optimizing and self-healing resilient networks; the specific application mode is shown in Fig. 1.

Fig. 1. Application model of AI in QoS optimization (AI learns network behavior patterns from large-scale data and forecasts traffic in order to improve resource utilization, reduce latency, and enhance stability).


This section provides insights into how these techniques can be applied to forecasting, decision making, and dynamic management of network resources, with a view to achieving efficient, low-latency, and high-reliability data transmission. Machine learning techniques are able to identify complex network behavior patterns by analyzing historical network data in order to predict future network conditions. Kwon et al. [7] used supervised learning methods to build models that successfully predicted network traffic fluctuations, providing network administrators with a valuable window to adjust resource allocations in advance. In addition, unsupervised and semi-supervised learning show potential in anomaly detection and pattern recognition, which can help to detect and respond to anomalous behavior in the network in a timely manner and maintain QoS standards [8]. Deep learning, especially Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), excels at processing sequential data and high-dimensional features and is widely used for the optimization of network data. Wang et al. [9] demonstrated how an RNN can effectively predict network congestion, while Arunachalam et al. [10] modeled network traffic by introducing a long short-term memory (LSTM) network, which not only improves prediction accuracy but also dynamically adapts the data transmission strategy based on the prediction results to reduce delay and packet loss. These deep learning models are able to handle the time-series characteristics of network data and provide more refined decision support for QoS optimization. Reinforcement learning has found a place in network resource management and scheduling through its ability to make decisions in complex environments. Mehraban et al. [11] proposed a dynamic bandwidth allocation algorithm based on reinforcement learning, which adjusts its policy according to immediate feedback on the network state and realizes efficient allocation of resources. In addition, Karasik et al. [12] trained a reinforcement learning model in a simulated environment, enabling the network to adaptively adjust its routing policy under different service demands and network conditions, improving overall QoS performance. The introduction of reinforcement learning enables the network optimization strategy to adapt more flexibly to changes in the network state, realizing the transition from reactive to proactive optimization. The volume of literature applying different techniques to QoS data optimization is shown in Fig. 2.

Fig. 2. Number of publications applying different techniques (DL and RL) to QoS data optimization, by year (2019-2022).

Although AI techniques have achieved significant results in QoS optimization, their practical application still faces a series of challenges, such as model interpretability, the acquisition and quality of training data, and the computational complexity of the algorithms. Rani et al. [13] emphasized the importance of model interpretability in real-world deployments, which is crucial for establishing regulatory trust and for troubleshooting. Meanwhile, Can et al. [14] discussed how to effectively collect and utilize network data for model training while protecting user privacy. The specific research findings are summarized in Table I.

TABLE I. SUMMARY OF RESEARCH RESULTS

Research Area | Authors and References | Contributions
Traffic Prediction | Alkanhel et al. [4] | Proposes a deep learning-based network traffic prediction model, significantly enhancing prediction accuracy and providing robust support for proactive resource scheduling.
Resource Allocation | Malhotra et al. [5] | Demonstrates a reinforcement learning-driven dynamic spectrum allocation scheme, vastly improving the efficiency of spectrum resource utilization.
QoS Assurance Mechanisms | Ghafoor et al. [1], Babaei et al. [2] | Respectively explore QoS assurance mechanisms based on the DiffServ model and the application of the IntServ model in multimedia transmission, enriching the theoretical and practical aspects of QoS management.
Fault Detection and Self-Healing Networks | Bendavid et al. [6] | Develops an AI-assisted fault management system capable of rapid issue localization and repair, enhancing operational efficiency.

B. Deepening Analysis of QoS-Oriented Intelligent Data Transmission Strategies

In this section, the paper delves further into intelligent data transmission strategies, in particular how to refine and optimize the data transmission process through advanced AI techniques to ensure superior quality of service (QoS) in the network. The focus is on three key areas: intelligent routing, dynamic adaptive transmission techniques, and the integrated application of AI in end-to-end QoS assurance, while also discussing the challenges and future directions of these strategies.

Intelligent routing is a core component of AI-based network optimization strategies. While traditional routing protocols tend to decide the forwarding path of packets based on simple path costs, AI techniques, especially deep learning and reinforcement learning, can provide more dynamic and strategic routing decisions. For example, Li and Zhang [15] proposed a routing algorithm based on deep reinforcement learning, which can dynamically adjust the routing path according to the network state and traffic demand, effectively reducing network congestion and improving transmission efficiency. Intelligent routing not only considers direct QoS metrics such as delay and packet loss, but also learns and predicts the future state of the network to achieve forward-looking route optimization.


Dynamic adaptive transmission is a key strategy for automatically adjusting data transmission parameters (e.g., coding rate, slice size) to different network conditions and application requirements. In application scenarios such as video streaming and real-time communication, Chen et al. [16] realize real-time monitoring and prediction of network conditions by integrating machine learning models, and dynamically adjust the transmission strategy to maintain the best user experience. End-to-end QoS guarantees require performance optimization across the entire data transmission chain, from the data source to the destination. The application of AI techniques at this level, as shown in the study by Kimbugwe et al. [17], achieves optimal allocation of resources by constructing a global optimization model that integrates multiple QoS metrics across the network. In addition, AI can help achieve cross-layer optimization, i.e., building bridges between the physical, network and application layers to ensure overall QoS consistency and reliability. This chain-wide intelligent management is an important trend in future network service assurance.

Although AI shows great potential in intelligent data transmission strategies, it still faces many challenges, including but not limited to model complexity and interpretability issues, data privacy protection, and robustness in dynamic and heterogeneous network environments. To further advance the application of AI techniques in QoS optimization, future research needs to explore more efficient model training methods, enhance model interpretability, ensure data privacy, and develop adaptive AI models that can cope with rapid changes in the network environment.

C. Recent Advances and Future Trends in Artificial Intelligence for QoS Optimization

In recent years, researchers are no longer limited to a single AI technique, but explore the integration of multiple advanced AI models and algorithms with the aim of achieving deeper intelligence in QoS optimization. For example, Huang and Li [18] combined deep learning and reinforcement learning to develop a hybrid model for more accurate network traffic prediction and resource scheduling, which significantly improved network efficiency and user experience. This trend of cross-domain technology convergence is not limited to the algorithms themselves, but also includes deep integration with network theory, providing unprecedented accuracy and flexibility for QoS optimization.

With the deepening application of AI technology in QoS optimization, the "black-box" nature of its decision-making has become a problem that cannot be ignored. To address this challenge, research has begun to favor the development of highly interpretable AI models to enhance the transparency and controllability of network management. In the study by Yang et al. [19], the authors propose an explainable machine learning-based approach that optimizes network parameters while providing clear explanations of the decision-making process, helping network administrators understand and trust the AI-generated policies and promoting the practical application and acceptance of the technology.

Facing the upcoming 6G era, the network architecture will be more complex and the service demands will be more diversified. Therefore, how to design an AI-driven QoS optimization framework adapted to future network characteristics has become a hot research topic. Khasawneh et al. [20] explored how to utilize AI technology to achieve QoS assurance with ultra-low latency, high reliability and large-scale connectivity in a 6G network environment, and proposed an intent-driven network management framework that can automatically adjust the network configuration according to the user's intent and service level agreements (SLAs) to ensure end-to-end QoS consistency.

With the in-depth application of AI in QoS optimization, data security and user privacy protection become issues that cannot be ignored. Osman et al. [21] explored how to ensure the secure transmission and processing of data by means of encryption technology and differential privacy while guaranteeing QoS, and how to design privacy-preserving AI models that reduce the reliance on users' personal information, which is crucial for enhancing user trust and promoting the application of AI in QoS optimization.

III. AI-DRIVEN QOS OPTIMIZATION METHODS

A. Problem Modeling

In this section, the mathematical model of the problem is elaborated in detail, including the establishment of the objective function, the setting of constraints, and the introduction of the necessary notation, with a view to forming a comprehensive and rigorous modeling framework, as shown in Fig. 3. In this model, N denotes the set of nodes in the network; E denotes the set of edges, where each edge e ∈ E connects two nodes and represents a data transmission path; C_e denotes the capacity of edge e, i.e., its maximum data transfer rate; d_{ij} denotes the delay from node i to node j; f_{ij} denotes the data traffic flowing through edge e = (i, j); Q denotes the set of quality-of-service metrics, including but not limited to average delay, packet loss rate and throughput; w_Q denotes the weight of quality-of-service metric Q, reflecting the importance of each metric to the overall optimization objective; R denotes the set of available resources, including bandwidth, computational resources, etc.; and x denotes the set of decision variables, which represent policy parameters such as resource allocation and routing.

Our objective is to maximize an integrated quality-of-service measure. Considering the possible conflicts between different QoS metrics, a weighted-sum approach is used to combine them. The objective function can be expressed as Eq. (1):

\max_{x} \sum_{Q \in \mathcal{Q}} w_Q \, U_Q(x)    (1)

where U_Q(x) denotes the utility obtained for metric Q under decision x. The constraints include resource constraints, quality-of-service constraints, traffic conservation and non-negativity of the traffic.
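As a concrete illustration of the weighted-sum objective in Eq. (1), the short Python sketch below evaluates it for a hypothetical set of normalized QoS utilities, and also checks the feasibility conditions that are formalized in Eqs. (2), (4) and (5) below. The variable names, utility values and weights are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not from the paper): evaluate the weighted QoS objective of Eq. (1)
# and check basic feasibility on a toy network. The path-delay constraint (Eq. (3))
# is omitted here for brevity.

def weighted_objective(utilities, weights):
    """Eq. (1): sum of w_Q * U_Q(x) over all QoS metrics Q."""
    return sum(weights[q] * utilities[q] for q in utilities)

def feasible(flows, capacity, nodes, sources, sinks):
    """Check Eq. (2) (capacity), Eq. (4) (conservation) and Eq. (5) (non-negativity)."""
    if any(f < 0 for f in flows.values()):                 # Eq. (5)
        return False
    if any(flows[e] > capacity[e] for e in flows):         # Eq. (2)
        return False
    for n in nodes:                                        # Eq. (4) at transit nodes
        if n in sources or n in sinks:
            continue
        inflow = sum(f for (i, j), f in flows.items() if j == n)
        outflow = sum(f for (i, j), f in flows.items() if i == n)
        if abs(inflow - outflow) > 1e-9:
            return False
    return True

if __name__ == "__main__":
    nodes = ["s", "a", "t"]
    capacity = {("s", "a"): 10.0, ("a", "t"): 8.0}
    flows = {("s", "a"): 6.0, ("a", "t"): 6.0}
    # Hypothetical normalized utilities and weights for delay, loss and throughput.
    utilities = {"delay": 0.8, "loss": 0.9, "throughput": 0.7}
    weights = {"delay": 0.4, "loss": 0.3, "throughput": 0.3}
    print("feasible:", feasible(flows, capacity, nodes, {"s"}, {"t"}))
    print("objective:", weighted_objective(utilities, weights))
```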



( i , j )E
fij  Ce , e  E (2)
architectures, in order to ac hieve more efficient and adaptive
QoS optimization strategies in complex network environments.
This section not only covers the principles of the algorithms, but
QoS Problem also the architectures of the DQN and the Actor-Critic
architectures, in order to achieve more efficient and adaptive
QoS optimization strategies in complex network environments.
This section not only covers the algorithm principles and
Maximization of architecture design, but also delves into the selection of key
Goal comprehensive service parameters and tuning strategies, with a view to providing
quality index. readers with a comprehensive and in-depth understanding. Deep
reinforcement learning combines the powerful representation
capability of deep learning and the decision-making strategy of
reinforcement learning, and is able to deal with problems with
high-dimensional input space and complex action space. In QoS
optimization scenarios, DRL models learn by interacting with
Resource constraint. the environment and automatically discover optimal policies to
maximize long-term cumulative rewards, which are directly tied
to QoS metrics such as latency, throughput, and packet loss.
In the scenario where DQN is applied to QoS optimization,
its core mathematical framework is first clarified. Given a
Constraint. Quality of service Markov Decision Process (MDP), denoted as (S , A , P, r,  ) ,
constraint. where S is the state space, A is the action space, P(s∣  s, a )
denotes the state transfer probability, r (s, a) is the instantaneous
reward function, and   [0,1) is the discount factor, the DQN
aims to learn a policy,  (a | s; ) , to optimize the network
Flow conservation performance by maximizing the expected cumulative
non-negative flow discounted rewards. This is shown in Eq. (6).

T 
J ( )  E s0 , a0 ,..., sT    t r ( st , at )  (6)
 t 0 
Fig. 3. Modeling of QoS problem.
where, st and at represent, respectively, the state and
2) Quality of Service constraints: Ensure that all QoS
metrics satisfy predetermined thresholds. This is shown in Eq. action executed at the tth moment. Executed action, and T is the
(3). end point of the time series. The DQN approximates the optimal
action value function Q(s, a) by using a deep neural network

( i , j ) p
dij  f ij Q(s, a; ) and employs empirical replay and fixed-objective
Dmax :  Dmax , p is path (3) network tricks to stabilize the learning process. Specifically for

( i , j ) p
fij
the state representation, it is assumed that each state st consists
of a series of feature vectors xt  [ xt ,1 , xt ,2 ,..., xt , n ] , which may
T
3) Traffic conservation: At each node in the network, the
incoming traffic is equal to the outgoing traffic in order to include network load, latency, packet loss rate, etc. The action
ensure the correct transmission of the data. This is shown in Eq. space is based on the actual scenario. The action space A is then
(4). defined based on practical application scenarios, such as
different path choices or bandwidth allocation schemes [23, 24].

j :( j , i )E
f ji  
j :( i , j )E
fij , i  N (4) For the continuous action space, this paper turn to the Actor-
Critic architecture, which consists of two parts: an Actor
4) Non-negative traffic: Traffic flowing through any edge network  (s; ) for generating the action distribution
must be non-negative. This is shown in Eq. (5) [22].  (a | s)  N ( (s; ),  2 , where  is the network parameter and
fij  0, (i, j )  E (5)  is the standard deviation of the action noise, and a Critic
network \(Q(s, a; \theta)\) evaluating the goodness of the current
B. Methodological Design strategy, i.e., the value of the action.
In this section, this paper will explore the potential of AI in The learning objective of the Critic network is to minimize
QoS optimization by elaborating the design of Deep the Temporal Difference Error (TD Error), i.e. This is shown in
Reinforcement Learning (DRL)-based frameworks, with a Eq. (7).
special focus on the Deep Q-Network (DQN) and Actor-Critic
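To make the DQN formulation above concrete, the following PyTorch sketch shows one possible implementation of the action-value network, epsilon-greedy action selection, experience replay and the fixed target network. The layer sizes, the reward weighting and the environment interface (env.reset() / env.step()) are assumptions made for illustration and do not come from the paper.

```python
# Minimal DQN sketch (assumed implementation, not the authors' code).
# State: QoS feature vector [load, latency, loss, ...]; actions: discrete routing /
# bandwidth-allocation choices; reward: weighted QoS terms, mirroring Eq. (1).
import random
from collections import deque

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def qos_reward(latency_ms, loss_pct, utilization_pct, w=(0.5, 0.3, 0.2)):
    # Illustrative reward: penalize latency and loss, reward utilization.
    return -w[0] * latency_ms - w[1] * loss_pct + w[2] * utilization_pct

def train_dqn(env, state_dim, n_actions, episodes=200, gamma=0.99,
              lr=1e-4, eps=0.1, batch_size=64, target_sync=100):
    q, q_target = QNetwork(state_dim, n_actions), QNetwork(state_dim, n_actions)
    q_target.load_state_dict(q.state_dict())
    opt = torch.optim.Adam(q.parameters(), lr=lr)
    replay, step = deque(maxlen=10_000), 0

    for _ in range(episodes):
        s, done = env.reset(), False           # assumed environment interface
        while not done:
            # Epsilon-greedy selection over the discrete QoS actions.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = int(q(torch.as_tensor(s, dtype=torch.float32)).argmax())
            s2, r, done = env.step(a)          # assumed environment interface
            replay.append((s, a, r, s2, done))
            s, step = s2, step + 1

            if len(replay) >= batch_size:
                batch = random.sample(replay, batch_size)
                S, A, R, S2, D = map(
                    lambda x: torch.as_tensor(x, dtype=torch.float32), zip(*batch))
                qa = q(S).gather(1, A.long().unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    # TD target computed with the fixed target network.
                    target = R + gamma * (1 - D) * q_target(S2).max(dim=1).values
                loss = nn.functional.mse_loss(qa, target)
                opt.zero_grad(); loss.backward(); opt.step()
            if step % target_sync == 0:
                q_target.load_state_dict(q.state_dict())
    return q
```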


For a continuous action space, the paper turns to the Actor-Critic architecture, which consists of two parts: an Actor network \mu(s; \theta^{\mu}) that generates the action distribution \pi(a \mid s) = \mathcal{N}(\mu(s; \theta^{\mu}), \sigma^{2}), where \theta^{\mu} denotes the Actor's parameters and \sigma is the standard deviation of the action noise, and a Critic network Q(s, a; \theta) that evaluates the goodness of the current strategy, i.e., the value of the action.

The learning objective of the Critic network is to minimize the temporal-difference error (TD error), as shown in Eq. (7):

L(\theta) = \mathbb{E}_{s, a, r, s'}\left[ \left( r + \gamma Q\big(s', \mu(s'; \theta^{\mu}); \theta^{-}\big) - Q(s, a; \theta) \right)^{2} \right]    (7)

where \theta^{-} represents the parameters of the target network, used to reduce training fluctuations. The Actor network, in turn, updates its policy along the gradient given by the Critic's feedback, so as to maximize the expected return. This is shown in Eq. (8):

\nabla_{\theta^{\mu}} J(\theta^{\mu}) = \mathbb{E}_{s \sim \rho^{\mu}}\left[ \nabla_{a} Q(s, a; \theta)\big|_{a=\mu(s;\theta^{\mu})} \, \nabla_{\theta^{\mu}} \mu(s; \theta^{\mu}) \right]    (8)

where \rho^{\mu} is the state-visitation distribution under the policy \mu [25].

For parameter tuning and model optimization, this paper maintains an experience replay pool of size N_D, from which a small batch of samples is drawn at random in each iteration to enhance the stability of learning. A soft-update mechanism is introduced for the target-network parameters, \theta^{-} \leftarrow \tau \theta + (1 - \tau)\theta^{-} with \tau \ll 1, to ensure a smooth transition of learning. Noise-injection mechanisms, such as the Ornstein-Uhlenbeck process, are employed to add exploratory behavior to the Actor network, especially in the early stages of learning. Reward clipping and normalization are used to appropriately clip and normalize the reward signal, so that extreme values do not affect the learning stability.
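The Critic and Actor updates of Eqs. (7) and (8), together with the soft target update and the Ornstein-Uhlenbeck exploration noise described above, can be sketched as follows. This is a schematic DDPG-style update step written under assumed Actor and Critic network classes and an assumed replay-batch format; it is not the authors' implementation.

```python
# Schematic DDPG-style update (assumed implementation, not the authors' code).
# Shows: Critic TD loss (Eq. (7)), deterministic policy gradient for the Actor (Eq. (8)),
# soft target updates, and Ornstein-Uhlenbeck exploration noise.
import torch
import torch.nn as nn

class OUNoise:
    """Ornstein-Uhlenbeck process for temporally correlated exploration noise."""
    def __init__(self, dim, theta=0.15, sigma=0.2, dt=1e-2):
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.x = torch.zeros(dim)

    def sample(self):
        dx = -self.theta * self.x * self.dt + \
             self.sigma * (self.dt ** 0.5) * torch.randn_like(self.x)
        self.x = self.x + dx
        return self.x

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005):
    """theta_target <- tau * theta + (1 - tau) * theta_target."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

def ddpg_update(actor, critic, actor_target, critic_target,
                batch, actor_opt, critic_opt,
                gamma=0.99, tau=0.005, reward_clip=10.0):
    s, a, r, s2, done = batch                       # tensors from the replay pool
    r = r.clamp(-reward_clip, reward_clip)          # reward clipping, as in the text

    # Critic: minimize the TD error of Eq. (7) against the target networks.
    with torch.no_grad():
        y = r + gamma * (1 - done) * critic_target(s2, actor_target(s2)).squeeze(-1)
    critic_loss = nn.functional.mse_loss(critic(s, a).squeeze(-1), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the deterministic policy gradient of Eq. (8).
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft updates keep the target networks slowly tracking the online networks.
    soft_update(critic_target, critic, tau)
    soft_update(actor_target, actor, tau)
    return critic_loss.item(), actor_loss.item()
```

In practice, the OUNoise sample would be added to the Actor's output action before it is applied to the network environment, with the noise scale decayed as training progresses.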


C. Realization Framework

The purpose of this section is to depict the complete blueprint of the AI-driven QoS optimization system, from architectural design to deployment practice, aiming to provide a detailed and comprehensive guide for building intelligent and efficient network performance optimization solutions. By integrating advanced technologies and strategies, the system ensures that network quality of service maintains excellent performance in complex and changing environments. The specific implementation framework is shown in Fig. 4. In the data collection layer, advanced network monitoring tools, such as network sniffers and the SNMP protocol, capture the core data of network activity in real time, including traffic dynamics, latency conditions and packet loss rates, providing a rich and realistic data source for AI model training. In the data processing and feature engineering layer, in-depth data cleaning and format standardization are carried out, and advanced feature selection algorithms distill the metrics most critical to QoS, providing a highly optimized input feature set for the model. In the AI model training layer, cutting-edge deep reinforcement learning techniques, such as DQN and DDPG, are adopted to design and train models that can accurately predict and make decisions, formulating optimal resource allocation and routing policies for the current network state [26]. In terms of hardware configuration, the computing cluster is equipped with high-performance processors and sufficient memory to support the high-intensity training needs of DRL models; at the same time, the network infrastructure needs to be SDN-compatible to lay a hardware foundation for dynamic network regulation. In terms of the software environment, this paper adopts Docker containers and Kubernetes orchestration to realize efficient deployment, flexible scaling and high-availability configuration of services, providing strong software support for stable system operation.

Fig. 4. Realization framework: data acquisition layer (network monitoring tools); data processing and feature engineering layer (feature selection algorithms); AI model training module (deep reinforcement learning models, DQN and DDPG); policy enforcement and regulation layer (SDN architecture); monitoring and feedback loop (closed-loop control system); hardware and software configuration (Docker, Kubernetes); security measures (encryption protocols); core strategies (transfer learning and adaptive learning).

In planning the implementation path of the AI-driven QoS optimization system, this paper adopted a phased, step-by-step strategy to ensure robustness, performance and close alignment with real-world business requirements. First, in the prototype validation phase, simulation data are used in a highly controlled experimental environment. This phase focuses on verifying the fundamental functionality and stability of the system and fine-tuning the model parameters, building a solid match between theory and practice and laying a reliable foundation for the subsequent steps. The work then moves on to small-scale pilot deployments, where non-core business areas are carefully selected as the testing ground for the first real-world trials. The goal of this phase is to collect operational data in a real network environment to verify the actual performance and stability of the system and, at the same time, to accumulate strategic insights and adjustment directions for the full-scale rollout. The next step is to gradually expand the scope of deployment: based on the feedback and lessons from the pilot phase, system performance is continuously optimized and the deployment is extended to a wider range of network areas according to the established plan. This phase emphasizes a smooth transition and long-term stability of the system, so that every expansion step is a solid one. Finally, this paper is committed to continuous iteration and optimization, building a comprehensive monitoring ecosystem that continuously collects and analyzes system operational data and periodically retrains and tunes the model based on this feedback. This strategy ensures that the QoS optimization system keeps pace with the times and continuously adapts to changes in the network environment and the growth of business demands, so as to continuously improve the quality of service in long-term operation and maintenance and provide users with a better and more stable network experience [27].

In order to comprehensively improve the performance and practicality of the AI-driven QoS optimization system, this paper implements and refines three core strategies. First, the generalization ability of the model is improved by incorporating transfer learning and adaptive learning mechanisms. This enables the model to quickly draw on past experience and adapt to new environments and scenarios, ensuring that it can still make accurate and efficient decisions under changing network conditions and thus maintain excellent performance across diverse application instances. Second, focusing on efficient resource allocation, a fine-grained computational resource management strategy is adopted to scientifically plan the allocation ratio between model training and real-time network regulation, which both meets the demands of growing model complexity and ensures the real-time responsiveness of network regulation, maximizing resource utilization efficiency and system performance. Finally, this paper recognizes that the close integration of technological innovation and business requirements is the key to success. Therefore, collaboration within the organization is actively promoted, a solid bridge is established between the information technology department and the business department, and a regular cross-departmental communication and collaboration mechanism ensures that each step of technical implementation accurately matches business requirements, so as to jointly promote the smooth implementation of the QoS optimization project and its continuous iteration, and ultimately achieve a significant enhancement of business continuity and user experience.

IV. EXPERIMENTAL DESIGN AND ANALYSIS OF RESULTS

A. Experimental Environment and Dataset

This section introduces the specific environment configuration of the experiment, the selection of the dataset and its preprocessing, laying a solid foundation for the subsequent experimental setup and result analysis.

The experiment was carried out in a network laboratory environment, simulating a medium-sized enterprise network architecture containing 100 end nodes connected to the core switch through 10 routers, forming a typical hierarchical network structure. All network devices support SDN (Software-Defined Networking), allowing flexible traffic control and policy configuration. The experimental environment was created using the Mininet simulator, ensuring reproducibility and flexibility.
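For readers who want to reproduce a comparable testbed, the sketch below shows how a small hierarchical SDN topology with rate-limited links could be described with Mininet's Python API. The node counts, link parameters and controller address are illustrative assumptions (the paper's topology uses 10 routers and 100 end nodes) and the script is not taken from the paper.

```python
# Minimal Mininet sketch (illustrative assumption, not the authors' setup):
# a scaled-down hierarchical topology with one core switch, several edge switches
# and hosts, using TCLink so that bandwidth and delay can be shaped per link.
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.link import TCLink

class HierarchicalTopo(Topo):
    def build(self, n_edge=3, hosts_per_edge=4):
        core = self.addSwitch("s0")
        for i in range(n_edge):
            edge = self.addSwitch(f"s{i + 1}")
            # Core-to-edge uplink: 100 Mbps, 2 ms delay (assumed values).
            self.addLink(core, edge, bw=100, delay="2ms")
            for j in range(hosts_per_edge):
                host = self.addHost(f"h{i}_{j}")
                # Access link: 10 Mbps, 5 ms delay (assumed values).
                self.addLink(edge, host, bw=10, delay="5ms")

if __name__ == "__main__":
    # Assumes an SDN controller is already listening at 127.0.0.1:6653.
    net = Mininet(topo=HierarchicalTopo(), link=TCLink,
                  controller=lambda name: RemoteController(name, ip="127.0.0.1",
                                                           port=6653))
    net.start()
    net.pingAll()   # quick reachability check
    net.stop()
```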
The dataset is derived from two parts: first, publicly available network traffic datasets, such as CAIDA and MAWI, which contain network traffic characteristics for different time periods and application types; and second, data collected in real time in the laboratory network by a self-designed network sniffing tool, capturing the network behavior characteristics of the actual working environment. Data preprocessing steps include removing outliers and noise, such as extreme data points caused by network failures [28].

For a comprehensive and detailed evaluation, the experimental design incorporates multi-dimensional parameter configurations and comparative analyses, aiming to provide insight into the efficacy of the AI-driven QoS optimization system. Specifically, the experiments compare the performance differences between advanced deep reinforcement learning algorithms, including DQN and DDPG, and traditional policy approaches, such as predefined rule-based policies, in cloud data center VM migration scenarios. The study is not limited to the choice of algorithm: the bandwidth allocation interval is also varied, spanning from 20% to 100% of the network resources, in order to explore the potential impact of different resource quotas on system performance. At the routing policy level, the experiments also consider diverse policy options, such as a shortest-path policy that seeks to minimize latency and a load-balancing policy that aims to balance the network load, to evaluate their relative effectiveness in ensuring QoS. For the deep reinforcement learning models, three different learning rates (0.001, 0.0001, 0.00001) are selected, with the intention of analyzing the effect of this hyperparameter on the learning process and the convergence efficiency of the models. Such a design not only reveals the optimal learning-rate setting, but also helps to understand how model performance trends under different learning rates, thus providing a scientific basis for more efficient network resource management and optimization [29, 30].

We set up three control groups: (1) Baseline group: traditional traffic management and QoS guarantee mechanisms, such as TCP/IP congestion control algorithms, are used. (2) Optimization group: the AI-driven optimization system is integrated to test the performance of the DQN and DDPG models in different network environments, respectively. (3) Hybrid group: traditional methods are combined with AI strategies to explore complementary advantages.

Before presenting the tables in Section IV, it is essential to define the performance indicators mathematically, to provide a clear understanding of how these metrics are calculated and interpreted. The performance indicators studied are as follows:

1) Average latency: The average time taken for a packet to travel from its source to its destination, measured in milliseconds (ms). It is calculated as Eq. (9).


L_{\mathrm{avg}} = \frac{\sum_{i=1}^{N} L_i}{N}    (9)

where L_i represents the latency of the i-th packet, and N is the total number of packets considered.

2) Packet Loss Rate (PLR): The percentage of packets that do not reach their intended destination, indicating network congestion or errors. It is defined as Eq. (10).

\mathrm{PLR}(\%) = \frac{P_{\mathrm{lost}}}{P_{\mathrm{total}}} \times 100    (10)

where P_{\mathrm{lost}} is the number of lost packets, and P_{\mathrm{total}} is the total number of packets sent.

3) Bandwidth Utilization (BU): The ratio of the actual data transferred over a network link to the maximum capacity of that link, reflecting how efficiently the network resources are being used. It is expressed as Eq. (11).

\mathrm{BU}(\%) = \frac{\mathrm{Data}_{\mathrm{transferred}}}{\mathrm{Bandwidth}_{\mathrm{capacity}}} \times 100    (11)

These mathematical definitions set the groundwork for the subsequent presentation of experimental results, allowing a precise quantification and comparison of the impact of the different optimization strategies on network performance.
B. Discussion driven QoS optimization system’s potential to revolutionize
The results presented highlight the profound impact of the network management by delivering substantial performance
AI-driven QoS optimization system across various dimensions enhancements. Its effectiveness across multiple metrics,
of network performance. This discussion delves deeper into the adaptability to various network conditions, and demonstrated
implications of these findings and their significance for the field superiority over existing methods make it a compelling choice
of network management. for future network optimization endeavors. However, ongoing
research should continue to explore avenues for further
AI Optimization Strategies’ Efficacy: the analysis performance refinements, particularly in the realms of model
underscores the remarkable improvements delivered by the interpretability, rapid adaptation to unforeseen network
DQN and DDPG optimization groups, with DDPG standing out dynamics, and ensuring seamless integration with existing
for its exceptional performance in reducing average delay, network infrastructures.
packet loss, and enhancing bandwidth utilization. This not only
validates the suitability of deep reinforcement learning for QoS C. Analysis of Results
optimization tasks but also indicates the potential for further In this section, the efficacy of the AI-driven QoS
refinement in algorithm selection to maximize benefits. optimization system will be analyzed in depth through a series
of experimental results demonstration, including the
Learning Rate Insights: The convergence speed and stability improvement of key metrics, algorithm performance evaluation
analysis (Table III) provides crucial insights into the trade-off and convergence analysis.
between convergence speed and final performance levels. The
observation that smaller learning rates lead to higher As can be seen from Table ⅠI, both the DQN and DDPG
performance, despite prolonged convergence, suggests a need optimized groups show significant improvements in terms of
for careful consideration of learning rate tuning in practical reduced average latency, reduced packet loss rate, and increased
implementations. This finding underlines the importance of bandwidth utilization compared to the baseline group, with the
patience in the training phase to achieve optimal model DDPG optimized group showing the best performance.
performance.
Table ⅡI demonstrates the convergence speed and final
1) Case study significance: The cloud data center scenario reward values of DQN and DDPG models with different
showcases the practical utility of the AI-driven QoS learning rates, showing that smaller learning rates, although


Table III reports the convergence speed and final reward values of the DQN and DDPG models under different learning rates, showing that smaller learning rates, although prolonging the convergence time, help the models reach higher performance levels; in particular, the DDPG model is more stable and attains higher reward values at lower learning rates. The curves of the iterative process are shown in Fig. 5.

TABLE II. COMPARISON OF QOS METRICS UNDER DIFFERENT OPTIMIZATION SCHEMES

Metric | Baseline Group | DQN Optimization Group | DDPG Optimization Group | Mixed Group
Average delay (ms) | 23.45 | 18.67 | 17.92 | 19.58
Packet loss (%) | 0.48 | 0.31 | 0.26 | 0.36
Bandwidth utilization (%) | 78.96 | 85.12 | 86.47 | 82.74
Note: All values are experimental averages.

TABLE III. MODEL CONVERGENCE SPEED AND STABILITY ANALYSIS

Model | Learning rate | Average convergence time (epochs) | Final reward value
DQN | 0.001 | 250 | 187.6
DQN | 0.0001 | 300 | 195.4
DQN | 0.00001 | 400 | 196.7
DDPG | 0.001 | 350 | 205.8
DDPG | 0.0001 | 450 | 210.2
DDPG | 0.00001 | 500 | 211.2

Fig. 5. Training curves of the iterative process.

D. Case Studies

A cloud data center is selected as the application scenario to analyze the network performance impact of the AI-driven QoS optimization system when handling large-scale VM migration.

Cloud data centers are centralized remote facilities used to host a large number of Internet-based applications and services. They are equipped with advanced hardware resources, including high-performance servers, storage devices and network equipment, all designed to provide elastic computing power and storage services. Virtualization plays a central role in cloud data centers, allowing physical resources to be abstracted into multiple virtual machines (VMs) for efficient resource utilization and flexible management. AI-driven QoS optimization systems are particularly important in this context, especially when dealing with large-scale VM migrations. VM migration, which moves running virtual machines from one physical host to another without affecting service, is critical to maintaining load balance in the data center, improving resource utilization and performing maintenance operations. However, if not handled properly, this process can have a significant impact on network performance, such as increased latency, bandwidth consumption or temporary service outages.

TABLE IV. CHANGES IN QOS METRICS BEFORE AND AFTER VIRTUAL MACHINE MIGRATION

Metric | Pre-migration | During migration | Post-migration (no optimization) | Post-migration (optimization)
Average delay (ms) | 21.34 | 45.67 | 28.78 | 20.89
Packet loss (%) | 0.23 | 0.87 | 0.42 | 0.28
Bandwidth utilization (%) | 83.72 | 69.45 | 81.95 | 87.41

As shown in Table IV, the delay during migration increases significantly to 45.67 ms, but through AI optimization the post-migration delay not only recovers to a level close to the pre-migration value (20.89 ms) but even outperforms the initial state (21.34 ms), indicating that the AI algorithm effectively manages network bottlenecks during migration and reduces the waiting time for data transmission. The packet loss rate spikes to 0.87% during migration, but after optimization it drops to 0.28%, close to the pre-migration rate of 0.23%, indicating that the AI strategy effectively identifies and alleviates network congestion and ensures stable packet transmission. Bandwidth utilization plummets during migration, but through optimization it eventually improves to 87.41%, which not only exceeds the pre-migration level (83.72%) but also significantly improves the efficiency of network resource usage.

TABLE V. IMPACT OF OPTIMIZATION STRATEGIES ON VM MIGRATION LATENCY

Strategy | Change in delay
No optimization | +34.89%
AI optimization | -2.33%

As shown in Table V, the no-optimization strategy leads to a delay increase of 34.89%, emphasizing the negative impact of the migration operation itself on network performance. The AI optimization strategy, on the other hand, not only avoids the delay increase but achieves a delay reduction of 2.33%, highlighting the advantage of the AI algorithm in dynamically adjusting network resources and path selection.

TABLE VI. COMPARISON OF PACKET LOSS RATE BEFORE AND AFTER OPTIMIZATION

Scenario | Change in packet loss rate
During migration to post-migration (no optimization) | +0.19%
During migration to post-migration (optimization) | -0.59%


As shown in Table VI, the packet loss rate rises by 0.19 percentage points in the no-optimization case, indicating that migration has a negative impact on network stability. However, with AI optimization the packet loss rate decreases by 0.59 percentage points, proving that the AI strategy effectively improves the reliability of network transmission.

As shown in Table VII, in the no-optimization state the utilization rate after migration recovers compared to that during migration, but there is still an overall decrease of 4.27 percentage points, illustrating the challenge of resource scheduling and network tuning. The AI optimization strategy not only recovers this loss but also improves bandwidth utilization by an additional 7.96 percentage points, demonstrating the ability of AI in efficient resource allocation.

TABLE VII. ANALYSIS OF CHANGES IN BANDWIDTH UTILIZATION

Scenario | Change in utilization rate
During migration to post-migration (no optimization) | -4.27%
During migration to post-migration (optimization) | +7.96%

V. PERFORMANCE EVALUATION AND DISCUSSION

A. Comparative Analysis

In order to comprehensively evaluate the superiority of the proposed AI-driven QoS optimization method, this section provides an in-depth comparison with several mainstream techniques in the current network optimization field, including traditional traffic engineering methods, rule-based QoS control strategies and recent machine learning-based optimization algorithms. The evaluation involves key QoS metrics such as throughput, delay, packet loss and resource utilization.

As seen in Table VIII, the AI-based optimization method significantly reduces the average latency while improving network throughput compared to the traditional methods. In particular, the AI optimization method proposed in this study further improves throughput by about 8% and reduces latency by 2 ms compared to the recent machine learning method A, showing stronger optimization results.

TABLE VIII. THROUGHPUT VS. LATENCY COMPARISON OF DIFFERENT OPTIMIZATION METHODS

Method | Average throughput (Mbps) | Average delay (ms)
Traditional traffic engineering methods | 1500 | 32
Rule-based QoS control policy | 1600 | 30
Machine learning approach A (MLA) | 1750 | 28
AI optimization method in this study | 1900 | 26

The data in Table IX show that the AI optimization method in this study also achieves significant results in reducing the packet loss rate and improving resource utilization. Compared with machine learning method A, the packet loss rate is reduced by 25% and the resource utilization rate is increased by 2 percentage points, indicating that the AI algorithm has clear advantages in efficient resource utilization and network stability.

TABLE IX. COMPARISON OF PACKET LOSS RATE AND RESOURCE UTILIZATION OF DIFFERENT OPTIMIZATION METHODS

Method | Average packet loss (%) | Resource utilization rate (%)
Traditional traffic engineering methods | 0.5 | 85
Rule-based QoS control policy | 0.3 | 87
Machine learning approach A (MLA) | 0.2 | 90
AI optimization method in this study | 0.15 | 92

B. Robustness and Scalability Analysis

In order to verify the robustness and scalability of the proposed method, this paper designs a series of simulation experiments to examine performance under different network conditions (e.g., network size, traffic pattern and network congestion level).

As shown in Table X, as the network size increases, although the absolute delay reduction decreases, the optimized average delay still shows a clear downward trend relative to the unoptimized case, demonstrating the effectiveness and scalability of the method in networks of different sizes.

TABLE X. OPTIMIZATION EFFECT WITH DIFFERENT NETWORK SIZES

Network size | Average latency before optimization (ms) | Average latency after optimization (ms)
Small scale (50 nodes) | 24 | 18
Medium scale (100 nodes) | 30 | 22
Large scale (200 nodes) | 38 | 30

TABLE XI. COMPARISON OF QOS PERFORMANCE FOR DIFFERENT TRAFFIC PATTERNS AND NETWORK CONGESTION LEVELS

Traffic pattern | Degree of congestion | Optimization method | Average delay (ms) | Packet loss (%) | Throughput (Mbps)
Bursty | Low | AI optimization | 20 | 0.2 | 1800
Bursty | Medium | AI optimization | 28 | 0.4 | 1600
Bursty | High | AI optimization | 40 | 0.6 | 1400
Constant | Low | AI optimization | 18 | 0.1 | 1900
Constant | Medium | AI optimization | 25 | 0.3 | 1700
Constant | High | AI optimization | 35 | 0.5 | 1500
Periodic | Low | AI optimization | 22 | 0.15 | 1850
Periodic | Medium | AI optimization | 29 | 0.35 | 1650
Periodic | High | AI optimization | 38 | 0.65 | 1350
All patterns (for comparison) | All cases | Traditional methods | 30-50 | 0.5-1.0 | 1400-1500


For the AI optimization approach, the average latency and packet loss rate increase and the throughput decreases as the network congestion level rises under all test conditions; nevertheless, the AI approach shows better adaptability and performance retention under high congestion than the traditional approach, as reflected in lower latency growth, a lower packet loss rate, and a higher throughput retention level.
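The qualitative comparison above can be made concrete by post-processing the Table XI rows. The Python sketch below computes, for the AI optimization approach, the latency growth and throughput retention from low to high congestion for each traffic pattern. It is provided for illustration only; the numbers are copied from Table XI, and the traditional method appears there only as aggregate ranges (30-50 ms, 0.5-1.0%, 1400-1500 Mbps), so it is not broken down per pattern.

# Per-pattern degradation of the AI optimization approach between low and high
# congestion, derived from the Table XI values (illustrative only).
table_xi_ai = {
    # pattern: {congestion: (avg_delay_ms, packet_loss_pct, throughput_mbps)}
    "bursty":   {"low": (20, 0.20, 1800), "high": (40, 0.60, 1400)},
    "constant": {"low": (18, 0.10, 1900), "high": (35, 0.50, 1500)},
    "periodic": {"low": (22, 0.15, 1850), "high": (38, 0.65, 1350)},
}

for pattern, rows in table_xi_ai.items():
    delay_low, _, tput_low = rows["low"]
    delay_high, loss_high, tput_high = rows["high"]
    latency_growth = (delay_high - delay_low) / delay_low * 100   # % increase, low -> high
    throughput_retention = tput_high / tput_low * 100             # % of low-congestion throughput kept
    print(f"{pattern:9s}  latency growth {latency_growth:5.1f}%  "
          f"throughput retention {throughput_retention:4.1f}%  "
          f"packet loss at high congestion {loss_high:.2f}%")

Even in the worst case (periodic traffic under high congestion), the AI approach retains roughly 73% of its low-congestion throughput, while its packet loss rate (0.65%) stays within the 0.5-1.0% aggregate range reported for the traditional methods.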
C. Limitations and Challenges

Despite the significant performance improvement, the AI-driven QoS optimization method in this study still has some limitations, which are mainly reflected in the following aspects: (1) Model training cost: training deep learning models requires a large amount of data and computational resources, which may pose a challenge for resource-limited network environments. (2) Model interpretability: the “black-box” nature of deep learning models limits the understanding of their decision-making process, which affects the trust and decision support of network administrators. (3) Dynamic adaptability: although the model shows good adaptability, its immediate response and adaptation strategies remain to be optimized in the face of extreme network events (e.g., large-scale DDoS attacks). (4) Data privacy and security: protecting user privacy and data security when collecting and processing network data remains a key concern for future work.

VI. CONCLUSION

This study successfully demonstrates the great potential and practical application value of AI techniques, especially deep reinforcement learning, in the field of network QoS optimization. By constructing a rigorous mathematical modeling framework and combining deep reinforcement learning algorithms built on deep Q-networks and actor-critic architectures, this paper designs and implements a set of efficient and adaptive QoS optimization strategies. Experimental results clearly demonstrate that the approach can significantly improve key network performance indicators, including reducing average latency, lowering the packet loss rate, and improving bandwidth utilization, especially when responding to dynamically changing network environments and complex business demands, showing excellent performance and adaptability. The case study further confirms the efficiency of the AI optimization system in handling complex scenarios such as virtual machine migration in cloud data centers, effectively mitigating the performance fluctuations triggered by migration and safeguarding user experience. The extensive comparisons in the performance evaluation section not only confirm the significant advantages of the AI-driven approach over traditional means, but also examine its robustness and scalability under different network sizes, traffic patterns, and levels of congestion, laying a solid theoretical and practical foundation for the widespread application of AI in real-world network operations.
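To make the decision loop summarized above more concrete, the sketch below shows how a DQN-style agent of the kind this paper builds on could map observed QoS indicators to a discrete control action and perform one temporal-difference update. It is a minimal illustration, not the authors' implementation: the state features, action set, reward shaping, network sizes, and hyperparameters are assumptions made here for demonstration, and PyTorch is used only for convenience.

# Minimal DQN-style sketch for QoS control (illustrative; all settings assumed).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 3      # assumed state: [normalized delay, packet loss, bandwidth utilization]
N_ACTIONS = 4      # assumed discrete actions, e.g. candidate bandwidth/routing profiles
GAMMA, EPSILON = 0.95, 0.1

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10000)

def select_action(state):
    # Epsilon-greedy choice over the discrete QoS actions.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def td_update(batch_size=32):
    # One temporal-difference step on a mini-batch sampled from the replay buffer.
    if len(replay) < batch_size:
        return
    states, actions, rewards, next_states = zip(*random.sample(replay, batch_size))
    s = torch.tensor(states, dtype=torch.float32)
    a = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(rewards, dtype=torch.float32)
    s2 = torch.tensor(next_states, dtype=torch.float32)
    q = q_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values
    loss = F.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy loop with a synthetic "environment"; a real controller would read these
# quantities from network telemetry and apply the selected action to the network.
state = [0.4, 0.02, 0.6]
for step in range(200):
    action = select_action(state)
    next_state = [random.random(), 0.05 * random.random(), random.random()]
    reward = -next_state[0] - 10.0 * next_state[1] + next_state[2]  # assumed QoS reward
    replay.append((state, action, reward, next_state))
    td_update()
    if step % 20 == 0:
        target_net.load_state_dict(q_net.state_dict())  # periodic target-network sync
    state = next_state

An actor-critic or DDPG-style variant, as also used in this study, would instead let an actor network output continuous resource-allocation values and use a critic to estimate their value, but the overall interaction loop is the same.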
Despite the remarkable achievements showcased, this study acknowledges several limitations. Primarily, the dynamic nature of real-world networks poses challenges in modeling all possible scenarios, which may limit the generalizability of the model to unforeseen network conditions. Furthermore, while deep reinforcement learning excels in adaptive decision-making, it requires substantial computational resources and time for training, which could be a hurdle for immediate deployment in resource-constrained environments.

Looking forward, there are ample opportunities to enhance the approach. Integrating advanced AI techniques, such as federated learning and transfer learning, could improve model adaptability and learning efficiency across diverse network ecosystems. Exploring explainable AI (XAI) would facilitate understanding of the decision-making logic behind the optimization strategies, thereby increasing trust and easing regulatory compliance. Moreover, extending the framework to address emerging networking challenges, such as ensuring QoS in edge computing and dealing with the complexities of 6G networks, is a promising direction for future research. Continuous refinement and validation through collaborations with industry partners will be crucial in translating these advancements into tangible improvements in global network operations and user satisfaction.

ACKNOWLEDGMENT

This study was supported by the Hunan Province Social Science Project “Research on Artificial Intelligence Assisting Deep Integration of Education and Teaching” (No. 804).

REFERENCES

[1] K. Z. Ghafoor, L. H. Kong, D. B. Rawat, E. Hosseini, A. S. Sadiq, “Quality of service aware routing protocol in software-defined internet of vehicles,” IEEE Internet Things J., vol. 6, no. 2, pp. 2817-2828, April 2019.
[2] A. Babaei, M. Khedmati, M. R. A. Jokar, E. B. Tirkolaee, “Sustainable transportation planning considering traffic congestion and uncertain conditions,” Expert Syst. Appl., vol. 227, pp. 119792, January 2023.
[3] Y. L. Huang, “Grid quality of service trustworthiness evaluation based on Bayesian network,” IEEE Access, vol. 8, pp. 15768-15780, February 2020.
[4] R. Alkanhel, E. M. El-kenawy, A. A. Abdelhamid, A. Ibrahim, M. Abotaleb, D. S. Khafaga, “Dipper throated optimization for detecting black-hole attacks in Manets,” CMC-Comput. Mater. Continua, vol. 74, no. 1, pp. 1905-1921, March 2023.
[5] A. Malhotra, S. Kaur, “A quality of service-aware routing protocol for FANETs,” Int. J. Commun. Syst., pp. e5723, December 2024.
[6] I. Bendavid, Y. N. Marmor, B. Shnits, “Developing an optimal appointment scheduling for systems with rigid standby time under pre-determined quality of service,” Flex. Serv. Manuf. J., vol. 30, no. 1-2, pp. 54-77, January-February 2018.
[7] S. Kwon, “Ensuring renewable energy utilization with quality of service guarantee for energy-efficient data center operations,” Appl. Energy, vol. 276, pp. 115424, June 2020.
[8] R. Li, C. Huang, X. Q. Qin, S. P. Jiang, N. Ma, S. G. Cui, “Coexistence between task- and data-oriented communications: a Whittle’s index guided multiagent reinforcement learning approach,” IEEE Internet Things J., vol. 11, no. 2, pp. 2630-2647, March 2024.
[9] C. P. Wang, S. H. Fang, H. C. Wu, S. M. Chiou, W. H. Kuo, P. C. Lin, “Novel user-placement ushering mechanism to improve quality-of-service for femtocell networks,” IEEE Syst. J., vol. 12, no. 2, pp. 1993-2004, June 2018.
[10] K. Arunachalam, S. Thangamuthu, V. Shanmugam, M. Raju, K. Premraj, “Deep learning and optimisation for quality of service modelling,” J. King Saud Univ.-Comput. Inf. Sci., vol. 34, no. 8, pp. 5998-6007, August 2022.
[11] S. Mehraban, R. K. Yadav, “Traffic engineering and quality of service in hybrid software defined networks,” China Commun., vol. 21, no. 2, pp. 96-121, April 2024.
[12] R. Karasik, O. Simeone, H. Jang, S. S. Shitz, “Learning to broadcast for ultra-reliable communication with differential quality of service via the conditional value at risk,” IEEE Trans. Commun., vol. 70, no. 12, pp. 8060-8074, December 2022.
[13] S. Rani, M. Balasaraswathi, P. C. S. Reddy, G. S. Brar, M. Sivaram, V. Dhasarathan, “A hybrid approach for the optimization of quality of service metrics of WSN,” Wirel. Networks, vol. 26, no. 1, pp. 621-638, January 2020.
[14] M. Can, M. C. Ilter, I. Altunbas, “Data-oriented downlink RSMA systems,” IEEE Commun. Lett., vol. 27, no. 10, pp. 2812-2816, October 2023.
[15] Z. W. Li, J. L. Zhang, “Data-oriented distributed overall optimization for large-scale HVAC systems with dynamic supply capability and distributed demand response,” Build. Environ., vol. 221, pp. 109322, November 2022.
[16] J. Y. Chen, C. H. S. Liao, Y. Wang, L. Jin, X. Y. Lu, X. L. Xie, R. Yao, “AQMDRL: automatic quality of service architecture based on multistep deep reinforcement learning in software-defined networking,” Sensors, vol. 23, no. 1, pp. 429, January 2023.
[17] N. Kimbugwe, T. R. Pei, M. N. Kyebambe, “Application of deep learning for quality of service enhancement in internet of things: a review,” Energies, vol. 14, no. 19, pp. 6384, September 2021.
[18] T. Huang, Y. Z. Li, “Quality of Service (QoS)-based hybrid optimization algorithm for routing mechanism of wireless mesh network,” Sensors Mater., vol. 33, no. 8, pp. 2565-2576, August 2021.
[19] W. Q. Yang, Y. H. Shi, Y. Gao, L. Wang, M. Yang, “Incomplete-data oriented multiview dimension reduction via sparse low-rank representation,” IEEE Trans. Neural Networks Learn. Syst., vol. 29, no. 12, pp. 6276-6291, December 2018.
[20] A. M. Khasawneh, M. A. Helou, A. Khatri, G. Aggarwal, O. Kaiwartya, M. Altalhi, et al., “Service-centric heterogeneous vehicular network modeling for connected traffic environments,” Sensors, vol. 22, no. 3, pp. 1247, March 2022.
[21] R. A. Osman, X. H. Peng, M. A. Omar, Q. Gao, “Quality of service optimisation of device-to-device communications underlaying cellular networks,” IET Commun., vol. 15, no. 2, pp. 179-190, February 2021.
[22] Z. H. Ding, W. R. Tan, W. B. Lu, W. J. Lee, “Quality-of-service aware battery swapping navigation and pricing for autonomous mobility-on-demand system,” IEEE Trans. Ind. Informatics, vol. 18, no. 11, pp. 8247-8257, November 2022.
[23] S. B. Akintoye, A. Bagula, “Improving quality-of-service in cloud/fog computing through efficient resource allocation,” Sensors, vol. 19, no. 6, pp. 1267, June 2019.
[24] Y. F. Liu, Y. Wang, R. J. Sun, Z. Y. Miao, “Hierarchical power allocation algorithm for D2D-based cellular networks with heterogeneous statistical quality-of-service constraints,” IET Commun., vol. 12, no. 5, pp. 518-526, May 2018.
[25] M. Gorawski, K. Pasterak, A. Gorawska, M. Gorawski, “The stream data warehouse: page replacement algorithms and quality of service metrics,” Future Gener. Comput. Syst., vol. 142, pp. 212-227, July 2023.
[26] S. Slimani, T. Hamrouni, F. Ben Charrada, “Service-oriented replication strategies for improving quality-of-service in cloud computing: a survey,” Cluster Comput., vol. 24, no. 1, pp. 361-392, January 2021.
[27] D. Peña, A. Tchernykh, S. Nesmachnow, R. Massobrio, A. Feoktistov, I. Bychkov, et al., “Operating cost and quality of service optimization for multi-vehicle-type timetabling for urban bus systems,” J. Parallel Distrib. Comput., vol. 133, pp. 272-285, January 2019.
[28] A. Caliciotti, L. R. Celsi, “On optimal buffer allocation for guaranteeing quality of service in multimedia internet broadcasting for mobile networks,” Int. J. Control Autom. Syst., vol. 18, no. 12, pp. 3043-3050, December 2020.
[29] C. Q. Li, Y. Liu, Y. Zhang, M. Y. Xu, J. Xiao, J. Zhou, “A novel nature-inspired routing scheme for improving routing quality of service in power grid monitoring systems,” IEEE Syst. J., vol. 17, no. 2, pp. 2616-2627, March 2023.
[30] M. S. Khatib, M. Atique, “FGSA for optimal quality of service based transaction in real-time database systems under different workload condition,” Cluster Comput., vol. 23, no. 1, pp. 307-319, January 2020.