
From Code to Conundrum: Machine Learning's Role in Modern Malware Detection


2024 International Conference on Advancements in Smart, Secure and Intelligent Computing (ASSIC) | 979-8-3503-7018-8/24/$31.00 ©2024 IEEE | DOI: 10.1109/ASSIC60049.2024.10507988

Vaibhavi Jha
School of Computer Science & Engineering
Vellore Institute of Technology
Vellore, India
[email protected]

Akshat Saxena
School of Computer Science & Engineering
Vellore Institute of Technology
Vellore, India
[email protected]

Abstract–In this research paper, we dive into the combination of malware analysis and machine learning to step up security. Right now, our digital world is like an anthill: connected in every possible way. In this study, we used machine learning algorithms, namely Random Forests, K-Nearest Neighbours (KNN), and Logistic Regression, for anomaly detection. Inside 60,000 benign and malware instances lie the answers we are looking for. We judge these three algorithms on accuracy, precision, recall, and F1-score. We hope to give professionals practical insights to secure their systems from malware while constantly being aware of new threats. In an age where security is crucial, this work connects past issues with machine learning today so that we can better prepare for the problems of tomorrow.

Keywords—Malware Analysis, KNN, Random Forests, Logistic Regression, Scikit-learn, Trends in Malware, Malware Detection, Security Threats

I. INTRODUCTION

In today's digitally linked world, the widespread threat of malware looms large over our virtual existence. The persistence of harmful software, commonly known as malware, poses a significant threat to the security of our networks, systems, and confidential data. This pressing demand for innovative solutions necessitates a paradigm shift in cybersecurity practices, encouraging the incorporation of machine learning into malware analysis. Concurrently, the field of machine learning has seen a remarkable evolution.

Machine learning techniques based on artificial intelligence have progressed from abstract notions to real-world applications, and they have the potential to drastically alter cybersecurity.

The ability of machine learning algorithms to discover complex patterns in massive datasets has opened up new avenues for detecting malware variants that may have escaped traditional defences.

This convergence of cybersecurity and machine learning is a watershed moment in our continuing battle against malware. Defenders and attackers continue to evolve in the face of a developing technological generation. Machine learning, although promising, is not a panacea, and it, too, faces difficult scenarios in the form of adversarial attacks aimed at misleading the very algorithms supposed to protect us. Because of the evolving attack surface of cloud products, 5G networks, and the Internet of Things, proactive protection is more important than ever.

The future will only benefit from our usage of current technologies in cybersecurity, such as deep learning and reinforcement learning. These new enhancements, however, also open up new avenues for malware dissemination. This research study looks into the fascinating world of malware analysis through the use of anomaly detection methodologies based on machine learning.

We look at the performance of three independent algorithms: Random Forests, K-Nearest Neighbours (KNN), and Logistic Regression, using the adaptable machine learning framework scikit-learn.

By leveraging a large dataset comprising 60,000 instances of both benign and malware data, our study aims to shed light on the effectiveness of these algorithms in distinguishing the subtle nuances between legitimate software and its malicious counterparts.

This introduction is a quick review of the significance of our study. We also go over why we were motivated to do it, the objectives, and how we structured this article. On our way to analysing malware and machine learning, we want to give online security professionals and researchers the resources they need, while also keeping a watch out for new threats and technology trends that could affect cybersecurity.

II. RESEARCH PROBLEM

The challenge of malware analysis employing anomaly detection based on machine learning is critical in the ever-changing world of cybersecurity. The objective is to create machine learning models capable of detecting irregularities in software or system component activity that may signal the existence of malware. It requires developing algorithms that are robust, scalable, and adaptable enough to distinguish between benign and malicious activity. Solving this research problem is crucial for strengthening the resilience of computer systems and networks against the ongoing danger posed by evolving malware, thereby protecting critical digital assets and preserving the security of individuals and organisations in the digital era.

Malware is computer software that is expressly designed to infiltrate, damage, or compromise computer systems, networks, or devices. This comprises viruses, worms, Trojans, ransomware, spyware, and adware, all of which are intended to cause harm, ranging from data theft to system interruption.

Malware analysis is the practice of researching and comprehending these entities to determine their features and dangers. It develops detection and mitigation tactics for malware assaults using techniques such as static analysis (code examination), dynamic analysis (behaviour observation), and reverse engineering (revealing logic).

III. LITERATURE REVIEW

The landscape of cybersecurity is perpetually in flux, driven by the relentless evolution of malicious software, commonly referred to as malware. With attackers employing increasingly sophisticated techniques, the traditional arsenal of malware detection struggles to keep pace. Machine learning, as a dynamic field in artificial intelligence, has emerged as a potent ally in the battle against malware. This literature review delves into previous research endeavours that have harnessed the prowess of machine learning algorithms, specifically K-Nearest Neighbours (KNN), Random Forests, Logistic Regression, and the versatile scikit-learn library, to analyse and detect malware. These studies have consistently yielded remarkable accuracy rates, often between 97% and 99%. Such outcomes underscore the potential of these algorithms to fortify cybersecurity defences.

For reference, the initial and concluding 10 sample data entries are included here, providing a detailed depiction of the dataset's nature and content, which forms the basis of this research endeavour.

TABLE I. (a) Initial 10 sample data entries

   Category  pslist.nproc  dlllist.avg_dlls_per_proc  handles.nhandles  svcscan.process_services
0  Benign    45            38.500000                   9129             24
1  Benign    47            44.127660                  11385             24
2  Benign    40            48.300000                  11529             27
3  Benign    32            45.156250                   8457             27
4  Benign    42            49.214289                  11816             24
5  Benign    40            52.050000                  12278             27
6  Benign    43            50.441860                  13116             27
7  Benign    42            49.214286                  11819             24
8  Benign    42            49.214289                  11813             24
9  Benign    40            52.050000                  12320             27

(b) Concluding 10 sample data entries

       Category    pslist.nproc  dlllist.avg_dlls_per_proc  handles.nhandles  svcscan.process_services
58586  Ransomware  38            38.710526                  8080              24
58587  Ransomware  39            39.000000                  8213              24
58588  Ransomware  37            39.108108                  7982              24
58589  Ransomware  46            36.956522                  9225              24
58590  Ransomware  37            39.054054                  7964              24
58591  Ransomware  37            39.270270                  7973              24
58592  Ransomware  37            36.405405                  7038              24
58593  Ransomware  38            38.105263                  7982              24
58594  Ransomware  37            39.243243                  7974              24
58595  Ransomware  38            39.131579                  8095              24
rate. The linear model used by logistic regression captures the
linear correlations between input capabilities and the binary target
Our ten sample data entries are the first step in the initiation of a variable introduced during model programming. It successfully
machine learning model training process. These entries aim to simulates the decision boundary that separates harmful from non-
create a model that can both describe and understand dataset malicious samples in the context of malware evaluation. Because
entries. of Logistic Regression’s transparency, professionals prefer it.

The “Category” column is uniformed with the label “Benign.”


Simply meaning it just has non-malicious entities or pure
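
As a quick illustration of how such a dataset can be inspected before modelling, the pandas sketch below loads the records and summarises the four fields just described. It is a minimal sketch rather than the authors' code: the file name "malware_dataset.csv" and the exact column spellings are assumptions for illustration.

import pandas as pd

df = pd.read_csv("malware_dataset.csv")   # placeholder file name

# Peek at the first and last rows, mirroring Table I (a) and (b)
print(df.head(10))
print(df.tail(10))

# Summarise the behaviour metrics discussed above and the class balance
cols = ["pslist.nproc", "dlllist.avg_dlls_per_proc",
        "handles.nhandles", "svcscan.process_services"]
print(df[cols].describe())
print(df["Category"].value_counts())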

A. K-Nearest Neighbours (KNN)

In the battle to develop reliable malware-detecting software, K-Nearest Neighbours (KNN) has emerged as a serious competitor. It works impressively with huge datasets and patterns. Vaibhavi used the KNN model as the first baseline to achieve the target, and with a 99% accuracy rate, the study's classification of malware and benign samples using KNN produced astounding results. In proximity-based classification, which is the foundation of KNN, the data is placed in a high-dimensional feature space and an unknown sample is categorised according to the majority class of its k nearest neighbours. This makes sense when attempting to evaluate whether something is malware or not, even though it is challenging to explain. Finding all the little nuances that signature-based detection systems overlook is the most difficult aspect.
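
The scikit-learn sketch below illustrates this proximity-based classification on the behaviour features described earlier. It is a simplified sketch rather than the study's exact pipeline: the file name, the assumption that all non-label columns are numeric, the 80/20 split, and k = 5 are illustrative choices.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("malware_dataset.csv")              # placeholder file name
X = df.drop(columns=["Category"])                    # numeric behaviour features
y = (df["Category"] != "Benign").astype(int)         # 1 = malware, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Each test sample takes the majority label of its k nearest training samples
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("KNN test accuracy:", knn.score(X_test, y_test))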

B. Random Forests

Random Forest is an ensemble learning technique that has gained a lot of attention for its ability to solve difficult classification problems. It achieves this by building many decision trees and then combining their results, and it has also found its way into malware analysis. For example, Akshat was able to achieve a 99.98% accuracy rate with Random Forests. In a world where malware is always changing and trying to outsmart our detection systems, Random Forest steps up to the plate: by combining the answers from different trees, it is able to spot small variations in malware strains with ease. On top of that, it can compare past outcomes and justify why it reaches the conclusions it does, an attribute that is useful when dealing with cybersecurity.
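
A minimal sketch of this ensemble idea is shown below, reusing the train/test split prepared in the KNN sketch above; the choice of 100 trees is an illustrative default, not the study's setting.

from sklearn.ensemble import RandomForestClassifier

# X_train, X_test, y_train, y_test: the same benign/malware split as in the KNN sketch
forest = RandomForestClassifier(n_estimators=100, random_state=42)  # an ensemble of 100 decision trees
forest.fit(X_train, y_train)
print("Random Forest test accuracy:", forest.score(X_test, y_test))

# Per-feature contribution to the forest's decisions, useful when justifying a classification
for name, importance in zip(X_train.columns, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")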

C. Logistic Regression

Logistic Regression is one of the fundamental classification methods in the field of machine learning. Although it may not be as complicated as some of its competitors, its simplicity and interpretability make it a valuable tool for distinguishing malicious software from safe software. Vaibhavi studied the use of Logistic Regression in malware detection and reported a 99.53% accuracy rate. The linear model used by Logistic Regression captures the linear correlations between the input features and the binary target variable introduced during model training. In the context of malware evaluation, it successfully models the decision boundary that separates harmful from non-malicious samples. Because of Logistic Regression's transparency, professionals prefer it.
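
The sketch below shows this linear decision boundary in scikit-learn, again reusing the split from the KNN sketch; the feature-scaling step and the max_iter value are assumptions added to help the solver converge, not details taken from the paper.

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_train, X_test, y_train, y_test: the same benign/malware split as in the KNN sketch
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
logreg.fit(X_train, y_train)
print("Logistic Regression test accuracy:", logreg.score(X_test, y_test))

# The learned weights show which behaviour features push a sample towards the malware class
weights = logreg.named_steps["logisticregression"].coef_[0]
for name, w in zip(X_train.columns, weights):
    print(f"{name}: {w:+.3f}")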

D. Scikit-learn Library

Researchers and practitioners have found the Python library scikit-learn to be a versatile and essential toolkit in the field of cybersecurity. They use it as an aid for tasks such as model selection, evaluation, data pre-processing, and feature engineering.

TABLE II. Scikit-learn model results

Actual  Predicted
1       1
1       1
0       0
1       1
0       0
1       1
1       1
0       0
1       1
1       1
1       1
1       1
1       1
1       1

By leveraging scikit-learn's capabilities, Akshat was able to unify KNN, Random Forests, and Logistic Regression into one system. This system assigns an index of 1 or 0, where 1 denotes malware and 0 denotes non-threatening behaviour. The accuracy rate at which this model performed was 99%, nothing short of impressive.

The main benefit of scikit-learn is that it provides a wide range of tools for every step of the machine-learning process. It makes it easier for researchers to harness the full potential of various machine learning algorithms.

To help enhance malware analysis and detection, K-Nearest Neighbours (KNN), Random Forests, and Logistic Regression, along with the scikit-learn library, are being used together. It is worth noting that these algorithms are most effective when used on large databases.

IV. METHODOLOGY

We developed four machine-learning models to analyse malware samples. We used the K-Nearest Neighbours (KNN), Random Forests, and Logistic Regression algorithms and took advantage of the scikit-learn toolkit to compare them. While doing this, a methodical process was followed, and findings from previous studies were used as a foundation to develop an entirely new approach.

In our study, we were able to effectively distinguish malware from benign samples. These approaches gave us access to a wide range of parameters that take part in categorising a machine's behaviour in different environments, whether malicious or pure [13-17]. Throughout the course of this study, three independent models were implemented in every program run: Logistic Regression, K-Nearest Neighbours (KNN), and Random Forest. We started off by splitting the database into training and test sets, and the models were fitted on the training set. The effectiveness of these algorithms was checked through an accuracy performance indicator. For the KNN model, the number of neighbours ranged from 1 all the way up to 20, and a graphing program was used to acquire a more detailed view. As with all our other tests, we carefully assess each algorithm and set of parameters to ensure reliability and accurate categorisation. We have made graphs and bar charts for better understanding, which gives a clear route map of our processes.

Figure 1. Proposed ML malware detection method.

The research methodology can be summarised in the following steps:

A. Dataset Acquisition and Pre-processing

To begin our analysis, we were required to find a modest-sized dataset that comprised both good and bad samples, i.e. benign along with malicious data. This dataset served as the base of all our research. After getting the data, we followed a rigorous data-cleaning process and described the data using different syntaxes. At this point, we are free to refine and choose which features to use and how to normalise them. This is one of the most important steps since it prepares the data for future analysis.
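
As a concrete illustration of this step, the sketch below shows one plausible cleaning, splitting, and normalisation pass with pandas and scikit-learn. It is an assumption-laden sketch rather than the authors' pipeline: the file name, the dropna/drop_duplicates cleaning, the 80/20 split, and min-max normalisation are all illustrative choices.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("malware_dataset.csv")          # placeholder file name
df = df.dropna().drop_duplicates()               # basic cleaning

X = df.drop(columns=["Category"])                # candidate features
y = (df["Category"] != "Benign").astype(int)     # 1 = malware, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

scaler = MinMaxScaler()                          # normalise every feature to [0, 1]
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)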

B. Algorithm Selection and Configuration

We chose the machine learning algorithms K-Nearest Neighbours (KNN), Random Forests, and Logistic Regression. Each of these algorithms has shown promise in previous studies conducted by researchers and data scientists. After picking them out, we go through each one carefully, making sure it is fine-tuned and optimised to get as close to perfection as possible. Every single parameter is adjusted until there is no more room for improvement on the customised dataset that will finally be used to run the algorithms.

C. Feature Engineering

In this step we extract all the relevant information from the dataset, a process also called feature extraction. This is a very useful and important technique in data science since it prepares the ideal dataset for running heavy and complex algorithms. Through extraction we identify and keep meaningful features and remove unnecessary parameters to obtain useful samples; in essence, we groom the dataset for its purpose.
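
One simple way to realise this step in scikit-learn is sketched below: near-constant columns are dropped and the features most associated with the label are kept. The variance threshold and k = 10 are illustrative values, and X_train/y_train are assumed to come from the pre-processing step above.

from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif

# Drop near-constant columns first
variance_filter = VarianceThreshold(threshold=0.0)
X_train_var = variance_filter.fit_transform(X_train)

# Keep the 10 features most associated with the benign/malware label
selector = SelectKBest(score_func=f_classif, k=10)
X_train_selected = selector.fit_transform(X_train_var, y_train)
print("features kept:", X_train_selected.shape[1])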

D. Model Training and Evaluation

After that, we split the dataset into two smaller sets; the code for this step is provided in the listing below.

• The chosen machine learning algorithms are trained on the set intended for them.
• The machine trains itself on the lines of the program and learns how to predict the patterns underneath the data.

To wrap it all up, we evaluate all the models using a range of performance metrics such as accuracy, precision, recall, and F1-score. This gives us an overall understanding of how effective they are.
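
For completeness, a small sketch of how these metrics can be computed with scikit-learn is given below; it assumes a fitted classifier (here the KNN model from the earlier sketch) and the held-out test split.

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Evaluate one fitted model on the held-out test set
y_pred = knn.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=["benign", "malware"]))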

The k-nearest neighbours (KNN) method is used in the following Python script, which seeks to determine the impact of various "k" values on the model's performance; "k" sets the number of nearest neighbours the model genuinely takes into account. The script also tracks categorisation accuracy in two lists, "test_scores" and "train_scores". It does this by looping over the KNN classifier with a range of "k" values, fitting it to the training data, and estimating accuracy for both sets. With this study, it should be easier to find a balance between variance and bias when choosing an ideal "k" value, giving a productive KNN model customised for the dataset at hand.

Algorithm 1 Model Training

from sklearn.neighbors import KNeighborsClassifier

train_scores = []
test_scores = []

#create a list of different values for n_neighbors
neighbours = range(1, 21)

#set up a KNN instance
knn = KNeighborsClassifier()

#loop through different neighbours
for i in neighbours:
    knn.set_params(n_neighbors=i)

    #fit the algorithm
    knn.fit(X_train, y_train)

    #record the train and test accuracy for this value of k
    train_scores.append(knn.score(X_train, y_train))
    test_scores.append(knn.score(X_test, y_test))

TABLE III. Test Score (a) and Train Score (b)

(a) test_scores         (b) train_scores
0.9994027303754266      1.0
0.9989761092150171      0.9997440054612168
0.999061433447099       0.9997440054612168
0.9987201365187713      0.9994026794095059
0.9986348122866894      0.9993600136530421
0.9982081911262799      0.9991253519924909
0.9982081911262799      0.9990186876013312
0.9980375426621161      0.9988053588190119
0.9979522184300341      0.9987200273060841
0.9977815699658703      0.9986560286713884
0.9977815699658703      0.9985066985237648
0.9977815699658703      0.9984000341326051
0.9977815699658703      0.9983573683761413
0.9976962457337883      0.9982720368632135
0.9978668941979523      0.9982720368632135
0.9974402730375427      0.9981013738373581
0.9973549488054607      0.9981013738373581
0.9973549488054607      0.9981013738373581
0.9973549488054607      0.9981013738373581
0.9973549488054607      0.9980160423244304
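
Since the study inspected these curves graphically, a small matplotlib sketch such as the following could be used to plot the two score lists produced by Algorithm 1 against k; it assumes the neighbours, train_scores, and test_scores variables defined above.

import matplotlib.pyplot as plt

# Visualise how train and test accuracy change as k grows
plt.plot(list(neighbours), train_scores, label="train accuracy")
plt.plot(list(neighbours), test_scores, label="test accuracy")
plt.xlabel("n_neighbors (k)")
plt.ylabel("accuracy")
plt.xticks(list(neighbours))
plt.legend()
plt.show()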

E. Comparative Analysis

Our study's focus is comparing the algorithms we picked: we want to know how they perform when it comes to detecting malware. The code below shows how well they performed in training and testing for our machine-learning models. It also gives insight into their positives and negatives, so we can get a better understanding of how they actually function.

Algorithm 2 Combined Algorithm Analysis

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

#putting the models into a dictionary
models = {"Logistic Regression": LogisticRegression(),
          "KNN": KNeighborsClassifier(),
          "Random Forest": RandomForestClassifier()}

def fit_and_score(models, X_train, X_test, y_train, y_test):
    #set up a random seed
    np.random.seed(42)
    #make a dictionary to keep the model scores
    model_scores = {}
    #loop through the models
    for name, model in models.items():
        #fit the model to the data
        model.fit(X_train, y_train)
        #evaluate the model and append its score to model_scores
        model_scores[name] = model.score(X_test, y_test)
    return model_scores

Algorithm 3 Model fit and performance score assessment

model_scores = fit_and_score(models=models, X_train=X_train,
                             X_test=X_test, y_train=y_train, y_test=y_test)

#model_scores = fit_and_score(models, X_train, X_test, y_train, y_test)

model_scores

Output

{'Logistic Regression': 0.9953071672354948,
 'KNN': 0.9986348122866894,
 'Random Forest': 1.0}

F. Interpretation and Insights

Beyond raw performance, we examine interpretability in depth. To fully appreciate the reasoning behind these models' classifications, we examine how they arrive at their conclusions. Interpretability is crucial for finding the traits and patterns that explain why the algorithms make the judgements they do. In line with the goals outlined earlier in this study, we hope that our methodology will validate whether K-Nearest Neighbours (KNN), Random Forests, and Logistic Regression from scikit-learn are effective when it comes to analysing malware. This way, we can get a better idea of whether cybersecurity can be enhanced through machine learning.
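
One model-agnostic way to probe this kind of interpretability in scikit-learn is permutation importance, sketched below for the Random Forest from the earlier sketch; the fitted model, the DataFrame-based test split, and n_repeats=10 are assumptions for illustration.

from sklearn.inspection import permutation_importance

# Shuffle each feature in turn and measure how much the test accuracy drops
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=42)
for name, score in sorted(zip(X_test.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.4f}")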

RESULTS

When we put our machine-learning models for malware detection to the test, they were extremely impressive. K-Nearest Neighbours (KNN), Random Forests, and Logistic Regression are all used in our models and showed very high accuracy rates. Logistic Regression had an outstanding 99.53% accuracy, while KNN hit a striking 99.86%. The Random Forests algorithm was perfect, achieving a 100% accuracy rating.

The fact that our models could distinguish between dangerous software and safe software is really encouraging. We have shown machine-learning methods to be promising for malware detection, and cybersecurity gains a bit more optimism when scikit-learn's machine learning techniques are thrown into the mix.

Figure 2. Accuracy result graph

Figure 3. Result graph

However, we must remain vigilant against the ever-changing threat posed by malicious software. If we want to maintain our success in staying ahead of our digital enemies, we need to keep refining our models; new threats are being created day by day, and if we want to stay ahead, we must balance caution with innovation.

CONCLUSION

We have come a long way in finding ways to protect ourselves from cyber threats. In the process, we found some incredibly powerful algorithms that let us tell friend from foe among digital software, all with the help of the scikit-learn library.

K-Nearest Neighbours (KNN), Logistic Regression, and Random Forests have given us fascinating insights into malware analysis, and they also provided us with promising results. The greatest score a model can reach is 100%, and you might think that KNN's score of 99.86% would be the end of the story; Random Forests, however, hit 100%, a perfect score. Even alongside those two impressive results, Logistic Regression's 99.53% accuracy at telling good from bad stood out. What we ultimately learned is that machine learning algorithms work very well at keeping our data safe.

In this era where threats are everywhere, it is important to use any advantage we can get. We will continue to study these models and push for more accurate results. Right now, all we want to do is make our digital world a safer place for everyone.

ACKNOWLEDGEMENT

We would like to express our gratitude to our mentors, Mr. Bikas Jha and Mr. Shubh Mittal, for their guidance in our pursuit of knowledge. We would also like to acknowledge the Canadian Institute for Cybersecurity for providing the datasets on their website; specifically, we are grateful for the obfuscated malware dataset, which accurately reflects real-world malware situations. Additionally, we would like to thank our friends and family for their support, which has been a driving force behind our determination. It is through the efforts of all involved that this research paper has come to fruition: a testament to the power of collaboration, perseverance, and an unending curiosity about the world around us.

REFERENCES

[1] Nikam, U.V.; Deshmuh, V.M. Performance evaluation of machine learning classifiers in malware detection. In Proceedings of the 2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), Ballari, India, 23–24 April 2022.
[2] Akhtar, M.S.; Feng, T. IOTA-based anomaly detection machine learning in mobile sensing. EAI Endorsed Trans. Create. Tech. 2022, 9, 172814.
[3] Akhtar, M.S.; Feng, T. Malware analysis and detection using machine learning algorithms. School of Computer and Communication, Lanzhou University of Technology, Lanzhou 730050, China.
[4] Gavriluţ, D.; Cimpoesu, M.; Anton, D. Malware detection using machine learning. Conference paper, November 2009.
[5] Sethi, K.; Kumar, R.; Sethi, L.; Bera, P.; Patra, P.K. A novel machine learning based malware detection and classification framework. In Proceedings of the 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), Oxford, UK, 3–4 June 2019.
[6] Feng, T.; Akhtar, M.S.; Zhang, J. The future of artificial intelligence in cybersecurity: A comprehensive survey. EAI Endorsed Trans. Create. Tech. 2021, 8, 170285.
[7] Sharma, S.; Krishna, C.R.; Sahay, S.K. Detection of advanced malware by machine learning techniques. In Proceedings of SoCTA 2017, Jhansi, India, 22–24 December 2017.
[8] Chandrakala, D.; Sait, A.; Kiruthika, J.; Nivetha, R. Detection and classification of malware. In Proceedings of the 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA), Coimbatore, India, 8–9 October 2021.
[9] Gibert, D.; Mateu, C.; Planes, J.; Vicens, R. Using convolutional neural networks for classification of malware represented as images. J. Comput. Virol. Hacking Tech. 2019.
[10] Dahl, G.E.; Stokes, J.W.; Deng, L.; Yu, D. Large-scale malware classification using random projections and neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013.
[11] Akhtar, M.S.; Feng, T. Deep learning-based framework for the detection of cyberattack using feature engineering. Secur. Commun. Netw. 2021, 2021, 6129210.
[12] Pavithra, J.; Josephin, F.J.S. Analyzing various machine learning algorithms for the classification of malware. IOP Conf. Ser. Mater. Sci. Eng. 2020, 993, 012099.
[13] Tharwat, A.; Gaber, T.; Fouad, M.M.; Snášel, V.; Hassanien, A.E. Towards an automated zebrafish-based toxicity test model using machine learning. Procedia Computer Science 2015, 65, 643-651.
[14] Gaber, T.; El Jazouli, Y.; Eldesouky, E.; Ali, A. Autonomous haulage systems in the mining industry: Cybersecurity, communication and safety issues and challenges. Electronics 2021.
[15] Sayed, G.I.; Ali, M.A.; Gaber, T.; Hassanien, A.E.; Snasel, V. A hybrid segmentation approach based on Neutrosophic sets and modified watershed: A case of abdominal CT liver parenchyma. In Proceedings of the 2015 11th International Computer Engineering Conference (ICENCO), Cairo, 2015, pp. 144-149, doi: 10.1109/ICENCO.2015.7416339.
[16] Applebaum, S.; Gaber, T.; Ahmed, A. Signature-based and machine-learning-based web application firewalls: A short survey. International Conference on Arabic Computational Linguistics, 2021.
[17] Tahoun, M.; Almazroi, A.A.; Alqarni, M.A.; Gaber, T.; Mahmoud, E.E.; Eltoukhy, M.M. A grey wolf-based method for mammographic mass classification. Applied Sciences 2020.