


2024 2nd International Conference on Networking and Communications (ICNWC)

An Enhanced Weapon Detection System using Deep Learning

M. Sivakumar
Department of Networking and Communications, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu - 603203, India
[email protected]

Marla Sai Ruthwik
Department of Networking and Communications, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu - 603203, India
[email protected]

Gatta Venkata Amruth
Department of Networking and Communications, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu - 603203, India
[email protected]

Kiranmai Bellam
Department of CEE, Prairie View A&M University, Texas, TX 77446, U.S.A.
[email protected]

Abstract: Considering the growing number of criminal acts, there is an urgent need to introduce computerized command systems in security forces. This study presents a novel deep learning model developed specifically to identify seven categories of weapons. The proposed model uses the VGGNet architecture and is implemented with the Keras library, which is built on top of the TensorFlow framework. The model is trained to recognize several types of weapons, including assault rifles, bazookas, grenades, hunting rifles, knives, handguns, and revolvers. The training procedure involves creating layers, executing processes, saving training data, determining success rates, and testing the model. A customized dataset consisting of the seven weapon categories was meticulously chosen and organized to support training of the proposed network. A comparative study on the newly created dataset was carried out against established models such as VGG-16, ResNet-50, and ResNet-101. The proposed model exhibits exceptional classification accuracy, reaching a remarkable 98.40% and outperforming the VGG-16 model (89.75% accuracy), the ResNet-50 model (93.70% accuracy), and the ResNet-101 model (83.33% accuracy). This research provides a vital viewpoint on the effectiveness of the proposed deep learning model for the complex problem of weapon classification, presenting encouraging outcomes that could greatly improve the capabilities of security forces in countering criminal activities.

Keywords: Deep learning, armed weapon detection, machine learning, object detection, convolutional neural networks

I. INTRODUCTION

Surveillance cameras have become prevalent in the modern era of improved science and technology as an essential instrument for preventing crime [1]. Security professionals must diligently oversee the installation of many camera systems in various locations [2,3]. Traditionally, post-incident analysis involves security guards arriving at the scene, examining recorded images, and collecting essential evidence [4]. Consequently, the need for proactive systems at crime scenes has been emphasized [3,5,6]. This research advocates the creation of a system that uses software to quickly notify security staff when it detects dangerous objects, enabling immediate action to avert potential crimes [4,7]. Hence, it is essential to develop a system that has the ability to learn and identify potentially dangerous objects [8].

The profound importance of deep learning in enhancing the execution of tasks inside protection control systems is generally recognized [3]. Deep learning is a specific area within machine learning that utilizes multiple layers of non-linear processing units to gather and modify features [9,10]. Deep learning focuses on extracting representations from primary data by integrating the understanding of a number of feature levels of the data [11]. An image representation entails quantifying the density metrics for each pixel and identifying features such as groups of edges and distinctive forms [12]. The Convolutional Neural Network (CNN) is the core architecture of deep learning. It consists of several layers, including convolution, activation, pooling, dropout, fully connected, and classification layers [13].

In contemporary society, illicit activities predominantly center around portable firearms [16]. Research consistently highlights the significant role that handheld guns play in various illicit activities such as theft, illegal hunting, and terrorism [17]. One proposed approach to reduce criminal activities is the establishment of surveillance systems or control cameras, which allows security units to take proactive steps at an early stage [18,19,20]. Nevertheless, the task of identifying weapons poses distinctive difficulties, such as self-occlusion, object resemblances, and complexities in the backdrop [15,16].

Self-occlusion refers to the circumstance where a portion of the weapon is blocked or hidden, while object similarity arises when non-weapon objects, such as hands or clothing, bear a resemblance to weapons. Background complexity refers to the difficulties related to the surroundings in which a weapon is placed [16].

This article introduces a novel deep learning model developed for the purpose of accurately detecting and classifying seven specific types of weapons, namely assault rifles, bazookas, grenades, hunting rifles, knives, pistols, and revolvers. The classification performance of the proposed model is thoroughly assessed against well-established benchmarks, namely the Visual Geometry Group (VGG-16) model [21] and the Residual Network ResNet-50 and ResNet-101 models [22]. Comparative investigation reveals that the proposed model exhibits higher accuracy and reduced loss rates compared to the VGG-16, ResNet-50, and ResNet-101 models. The remainder of this work is organized as follows: Section 2 reviews the relevant literature. Section 3 describes the materials and methods employed in the study. Section 4 presents the evaluation of classification performance. Section 5 provides an in-depth discussion. Lastly, Section 6 concludes the study by summarizing significant findings and indicating potential avenues for future research.

II. LITERATURE SURVEY

The emergence of automatic handgun detection systems for surveillance and security applications has drawn considerable attention in recent years, leading to the exploration of novel techniques and methodologies. This literature assessment consolidates and evaluates numerous research efforts, each making a distinct contribution to the developing field of weapon detection systems. A major study [18] focused on generating crucial training data for an automatic handgun detection system using deep Convolutional Neural Network (CNN) classification. The study thoroughly explored the effectiveness of several categorization models, with a particular focus on the importance of reducing false positives. After analyzing two approaches to classification, one using the sliding-window technique and the other using the region proposal approach, it was found that the fast region-based CNN model produced the most promising results [18].

The study [23] focused on determining the imbalance map and evaluating candidate regions from input frames using image fusion, with the aim of improving object detection in surveillance recordings by means of a cost-effective symmetrical dual-camera system and of reducing false positives. By including brightness-guided preprocessing, involving darkening and contrast operations, during both the training and testing phases, a model was created that can accurately detect cold steel weapons [24]. The successful implementation of this versatile approach showcased enhanced accuracy in recognizing objects and events in video footage.

A study on hybrid weapon detection employed fuzzy logic to create a system that can identify dangerous items, such as firearms and blades, by incorporating extra variables. This not only strengthened the accuracy of the results, but also substantially decreased the occurrence of false alarms [25]. Two new techniques were introduced in a study that exploited deep Convolutional Neural Networks (CNNs) to classify weapons [26]. The study examined the impact of adjusting the neuron count in the fully connected layer while using the weights of a pre-trained VGG-16 model, providing useful knowledge about classification systems. A study that specifically targeted the identification of guns in surveillance videos [5] based its analysis on regions where human presence was detected. The implementation of a weapon detection system that relies on distinct components of weapons demonstrates a focused approach to improving detection efficiency [27].

An independent inquiry [28] examined the implementation of various tiers of defense for Internet of Things (IoT) platforms. The suggested system continuously assessed multidimensional events and determined protection levels, fulfilling the requirement for a comprehensive approach to security management in dynamic situations. Another study conducted real-time object detection, focusing on movable weapons such as pistols and rifles [20]. That research successfully recognized and categorized armament in photographs by using TensorFlow-based versions of Overfeat, a convolutional neural network (CNN) based visual feature extractor, and a data generator. A research investigation on the automated identification of firearms and swords presented algorithms to alert human operators when these items were recognized in closed-circuit television systems [29]. The project aimed to prioritize practicality by minimizing false alarms and creating a system that can promptly issue notifications in hazardous situations.

Clustering algorithms and color-based segmentation were used in the field of visual weapon recognition to exclude unimportant objects from photographs. The Harris detector and the fast retina keypoint descriptor were critical in detecting pertinent objects, dealing with challenges such as partial occlusion, scaling, rotation, and the presence of several weapons [30]. A further study was conducted to identify individuals who pose a risk, namely those carrying handheld weapons. That study established a model that examines the interaction between humans and objects, designed to detect hazardous incidents by identifying hidden dangerous objects in likely areas of the human body [31].

To summarize, previous research has mostly focused on categorizing hidden weapons such as firearms, knives, and handheld weapons. However, this literature review highlights a significant deficiency in the current body of knowledge: as far as we know, no study has thoroughly examined the identification and differentiation of various types of weapons.

This study tries to fill this gap by offering a complete weapon detection model that can detect a wider range of weapon categories than prior research, while maintaining a high level of accuracy.

III. MATERIALS AND METHODS

3.1 Dataset and Pre-Processing

The lack of a consistent dataset for weapon identification and recognition prompted the development of a distinct dataset consisting of 5214 weapon photos sourced from the internet. In order to ensure the effectiveness of detecting and identifying real-life weaponry, the downloaded photographs were meticulously chosen based on their high quality and varied perspectives. An essential component of the pre-processing stage entailed removing extraneous elements from every weapon image. Each image was analysed individually, using several computer tools, to improve its quality and relevance; this involved operations such as padding, concealment of irrelevant content, background removal, scaling, and rotation. After the individual photographs were prepared, each weapon class was considered separately, and the images were collected and arranged into a dataset. The dataset consisted of monochromatic photographs depicting assault rifles, bazookas, grenades, hunting rifles, knives, handguns, and revolvers. The Python programming language was used to convert each image to grayscale format and resize it to dimensions of 144 × 144 pixels. The photos were classified according to their respective weapon categories and arranged accordingly. Figure 3.1 presents graphic representations of a sample of the dataset. The specifics of the weapon dataset, including the categories and quantities of photographs, are outlined in Table 3.2.
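As a concrete illustration of the preprocessing step just described, the sketch below (not the authors' published code) loads class-labelled weapon images, converts them to grayscale, and resizes them to 144 × 144 using OpenCV and NumPy; the directory layout and folder names are assumptions made for the example.

```python
import os
import cv2          # OpenCV, one of the libraries listed in Section 3.3
import numpy as np

IMG_SIZE = 144      # target resolution reported in Section 3.1
CLASSES = ["assault_rifle", "bazooka", "grenade", "hunting_rifle",
           "knife", "handgun", "revolver"]        # assumed folder names

def load_dataset(root_dir):
    """Read class-labelled weapon images, convert to grayscale, resize to 144x144."""
    images, labels = [], []
    for label, cls in enumerate(CLASSES):
        cls_dir = os.path.join(root_dir, cls)
        for fname in os.listdir(cls_dir):
            img = cv2.imread(os.path.join(cls_dir, fname), cv2.IMREAD_GRAYSCALE)
            if img is None:                       # skip unreadable files
                continue
            img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
            images.append(img.astype("float32") / 255.0)   # scale to [0, 1]
            labels.append(label)
    X = np.expand_dims(np.array(images), axis=-1)  # shape: (N, 144, 144, 1)
    y = np.array(labels)
    return X, y
```

The single grayscale channel keeps the input tensor small, which is consistent with the paper's stated goal of training on modest hardware.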

3.2 CNN Model

Within the domain of object recognition, the CNN algorithm has emerged as the most widely employed deep learning method. The CNN approach, which achieved recognition for its exceptional performance in the 2012 ImageNet Large-Scale Visual Recognition Competition, went on to be utilized in many other fields. This work introduces a new model (Figure 3.3) that is built upon the VGG-16 model (Figure 3.4). The model comprises 25 layers, including convolution, pooling, dropout, rectified linear unit (ReLU), flatten, fully connected, and classification layers, with a total of 337,671 parameters. Figure 3.3 shows the developed convolutional neural network model, and Figure 3.4 shows the architectural design of the VGG-16 model.

Fig. 3.3 Developed convolutional neural network model

The decision to utilize the proposed model instead of the standard VGG-Net model was based on its lower layer count, which makes it well suited for training on inexpensive computers, reducing training time and processing costs while continuing to provide excellent accuracy. The model's architecture applies two convolutional layers followed by a pooling layer to the grayscale input images. The pooling layer utilizes a 2 × 2 filter with a stride of 2, shifting across the input matrix to generate a new, smaller matrix. The use of ReLU activation functions in the convolution layers guarantees the efficiency and speed of the network.

Applying a dropout layer of 25% after each pooling layer helps prevent memorization (overfitting). The operations of convolution, pooling, and dropout are applied iteratively in succeeding blocks with varying parameters and filter channel counts. The resulting feature maps are then flattened into an array of neurons and passed through fully connected layers, ending in a fully connected layer composed of 2048 neurons. The classification layer utilizes the softmax activation function to produce outputs ranging from 0 to 1 over the seven distinct weapon categories; the greatest output value indicates the predicted weapon type.
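The paper reports a 25-layer network with 337,671 parameters but does not list the exact layer-by-layer configuration, so the following Keras sketch is only an illustrative reading of Section 3.2: stacked convolution and 2 × 2 max-pooling blocks with ReLU activations, 25% dropout after each pooling stage, a flattening step, a 2048-unit fully connected layer, and a seven-way softmax classifier. The filter counts are placeholders and the resulting parameter total will not match the published figure.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 7   # assault rifle, bazooka, grenade, hunting rifle, knife, handgun, revolver

def build_model(input_shape=(144, 144, 1)):
    """Illustrative CNN in the spirit of Section 3.2; filter counts are placeholders."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        # Repeated convolution + 2x2 max-pooling blocks with ReLU activations,
        # each followed by 25% dropout as described in the paper.
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Dropout(0.25),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Dropout(0.25),
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Dropout(0.25),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2), strides=2),
        layers.Dropout(0.25),
        # Flatten into a 2048-unit fully connected layer, then classify with softmax.
        layers.Flatten(),
        layers.Dense(2048, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    return model

model = build_model()
model.summary()
```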

3.3 Media and Libraries Used

The proposed model was implemented using the Keras library, which is built on TensorFlow. Additional libraries, including NumPy, Matplotlib, PIL, os, OpenCV, scikit-learn, and imageio, were also leveraged. TensorFlow, a freely available software library, served as the interface for training and executing the machine learning algorithms. The Python programming language was used to write the programs for implementing and testing the model. Model training was executed on a PC with an Intel Core i7-9750H 2.60 GHz processor, an Nvidia GeForce GTX 1650 graphics card, and 8 GB of RAM. The primary objective of the study was to further improve the accuracy, sensitivity, and specificity rates obtained with convolutional neural networks (CNNs) while at the same time decreasing the loss rates.

3.4 Training and Evaluation

The training method consisted of multiple iterations in which parameters were optimized to minimize the loss function and ensure precise predictions. Performance evaluation and mitigation of overfitting were accomplished by employing a validation set. Afterwards, the trained model was evaluated using a distinct testing dataset to measure its ability to generalize. The model's success in weapon identification and categorization was quantified using performance indicators such as precision, recall, accuracy, and F1 score.

3.5 Comparison with Existing Models

To assess the performance of the proposed model, a comparative analysis was conducted against established architectures, specifically the VGG-16 model and the Residual Network (ResNet) models ResNet-50 and ResNet-101. The examination considered parameters such as accuracy, error rate, and computational efficiency.

3.6 Ethical Considerations

Throughout the process of developing the model, ethical considerations were given the highest priority. The utilization of internet-acquired photographs complied with copyright restrictions, and diligent measures were taken to ensure the dataset's inclusiveness and absence of bias. The study focused on promoting the prudent utilization of technology for surveillance and safety objectives, taking into account the possible ramifications for society.

IV. RESULTS AND DISCUSSION

4.1 Model Comparison and Training

The study conducted experiments with seven different weapon types, comparing the proposed model with well-established architectures such as the VGG-16, ResNet-50, and ResNet-101 models. The dataset was partitioned into training (60%), testing (20%), and validation (20%) subsets for every model, following the division given in Table 4.1. The models were trained with consistent parameters, including the activation function (ReLU), mini-batch size (32), dropout rate (0.25), optimization technique (Adamax), and number of epochs (30).

TABLE 4.1 WEAPON DATASET DIVISION
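A minimal sketch of the common training configuration described above (60/20/20 split, Adamax optimizer, mini-batch size 32, 30 epochs), reusing the hypothetical `X`, `y`, and `build_model` names from the earlier snippets; it is an assumption-laden outline rather than the authors' training script.

```python
from sklearn.model_selection import train_test_split
from tensorflow import keras

# 60% training, 20% validation, 20% testing, as in Table 4.1.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

model = build_model()
model.compile(optimizer=keras.optimizers.Adamax(),
              loss="sparse_categorical_crossentropy",   # integer class labels
              metrics=["accuracy"])

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    batch_size=32,
                    epochs=30)

test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test accuracy: {test_acc:.4f}")
```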
The evaluation of the VGG-16 model (Figure 4.2a) revealed a comparatively slow learning rate, achieving a success accuracy of 90.12% only after 30 epochs. In contrast, the ResNet-50 model (Figure 4.2b) demonstrated swift learning and attained a success accuracy rate of 94.25%. The ResNet-101 model, shown in Figure 4.2c, attained an accuracy of 84.43%, which was considerably inferior to that of the ResNet-50 model. The proposed model, depicted in Figure 4.2d, exhibited rapid learning, achieving an excellent accuracy of 98.32% after 30 epochs. The effectiveness of a neural network as a whole depends on its structure and parameter configuration. Despite having fewer layers and parameters, the proposed model outperformed the VGG-16, ResNet-50, and ResNet-101 models.

The reduced complexity facilitated faster data processing, training, and testing, leading to enhanced accuracy in achieving the desired outcomes and reduced rates of failure.

4.2 Model Evaluation and Comparison

In order to thoroughly assess the models, a meticulous comparison was carried out, taking into account reliability, specificity, sensitivity, and loss factors (Table 4.3). The proposed model consistently outperformed the alternative models, achieving an impressive success rate of 98.40%. It outperformed the VGG-16, ResNet-50, and ResNet-101 models, which achieved success rates of 89.75%, 93.70%, and 83.33% respectively.

The superiority of the proposed model is apparent in its accelerated learning, enhanced precision, and lower error rates. This highlights the increased effectiveness achieved by optimizing the model architecture for weapon detection.

4.3 Confusion Matrix Analysis

The correctness of the proposed approach was evaluated by analyzing its effectiveness using a confusion matrix (Figure 4.4). Remarkably, assault rifles and hunting rifles demonstrated the highest categorization accuracy, at 99.45%. In contrast, pistols (94.62%) and bazookas (97.72%) exhibited noticeably lower success rates. The slight decline in performance, particularly for pistols, can be ascribed to the visual resemblance they bear to other firearm classes. The proposed model's high overall accuracy of 98.40% demonstrates its effectiveness in accurately differentiating between the different weapon categories, and it showcases the ability to achieve exceptional accuracy in many real-world weapon detection settings.

Fig 4.4 Confusion Matrix Graph
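The confusion matrix and per-class figures discussed in this section can be reproduced in principle with scikit-learn, which is listed among the libraries in Section 3.3; the sketch below reuses the hypothetical names from the earlier snippets and is illustrative only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Predicted class = index of the largest softmax output (Section 3.2).
y_prob = model.predict(X_test)
y_pred = np.argmax(y_prob, axis=1)

cm = confusion_matrix(y_test, y_pred)
per_class_acc = cm.diagonal() / cm.sum(axis=1)   # per-class recall, as in Figure 4.4
for cls, acc in zip(CLASSES, per_class_acc):
    print(f"{cls:15s} {acc:.2%}")

# Precision, recall and F1 score for every weapon category.
print(classification_report(y_test, y_pred, target_names=CLASSES))
```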


4.4 Performance Evaluation with Real-Life Data

To assess its feasibility, the proposed model underwent testing with images depicting different categories of weaponry being employed against human subjects. The test data were collected from the web, and the region proposal approach was utilized to validate the accuracy.

Fig. 4.5 Region proposal approach

The proposed model demonstrated practical adaptability through the utilization of the region proposal approach [38]. The model successfully identified areas containing potential weapons and achieved satisfactory accuracy in categorizing them appropriately.
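The paper does not specify how its region proposal step is implemented, so the following sliding-window sketch is only a generic stand-in for generating candidate regions that are then classified by the trained model; the window size, stride, and confidence threshold are assumptions.

```python
import numpy as np

def sliding_window_regions(gray_img, win=144, stride=72):
    """Yield (x, y, crop) windows over a grayscale image; a generic stand-in
    for the region proposal step, which the paper does not describe in detail."""
    h, w = gray_img.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, gray_img[y:y + win, x:x + win]

def detect_weapons(gray_img, model, threshold=0.9):
    """Classify each candidate region and keep confident weapon predictions."""
    detections = []
    for x, y, crop in sliding_window_regions(gray_img):
        patch = crop.astype("float32") / 255.0
        probs = model.predict(patch[np.newaxis, ..., np.newaxis], verbose=0)[0]
        cls = int(np.argmax(probs))
        if probs[cls] >= threshold:
            detections.append((x, y, CLASSES[cls], float(probs[cls])))
    return detections
```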

4.5 Mean Average Precision (mAP) Evaluation


To assess the performance of the proposed model, the mean average precision (mAP) metric was utilized (Table 4). The mAP scores for each weapon class serve as a clear indication of the model's efficacy in accurately detecting and categorizing weapons. The mAP routinely surpasses 97%, demonstrating the model's high accuracy in detecting the various weapon categories in real-world situations.
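The paper does not state exactly how the per-class mAP values were computed, so the sketch below shows one simple, classification-level interpretation: a one-vs-rest average precision per weapon class (via scikit-learn), averaged into a single mAP figure. It reuses the hypothetical `y_test` and `y_prob` arrays from the earlier snippets.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_prob, num_classes=7):
    """One-vs-rest average precision per class, then the mean (mAP) over classes."""
    aps = []
    for c in range(num_classes):
        aps.append(average_precision_score((y_true == c).astype(int), y_prob[:, c]))
    return float(np.mean(aps)), aps

mAP, per_class_ap = mean_average_precision(y_test, y_prob)
print(f"mAP: {mAP:.4f}")
```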

4.6 Sample Test Results


The effectiveness of the proposed approach across multiple armament categories is demonstrated by the results of the empirical investigation (Figure 7). The images exemplify the model's proficiency in precisely categorizing firearms in various circumstances and environments, strengthening its suitability for real-world use. This project entailed the creation of an artificial intelligence model utilizing deep learning methodologies. The model was specifically engineered to autonomously support safety measures and possesses the capability to identify and categorize seven distinct varieties of firearms. The proposed model outperformed existing models, such as VGG-16, ResNet-50, and ResNet-101, achieving an excellent accuracy rate of 98.40%. The model's decreased total number of layers and parameters resulted in improved processing, training, and testing speed, rendering it a viable solution for real-time weapon identification.

The model's ability to detect potential weapon locations was assessed using a region proposal method, which demonstrated its practical adaptability. The mAP values provided additional validation of the model's precision in categorizing the various weapon classes. The proposed model demonstrated its superiority through a comparison with current studies, exhibiting greater accuracy rates than similar firearm detection systems. The model's ability to detect many weapon types simultaneously makes it an advanced solution for safety and monitoring applications.

In summary, the created model is a very efficient tool for independent security systems, demonstrating the capability to improve safety measures in various situations. The study highlights the significance of effective and precise weapon detection models, which contribute to the wider domain of machine vision for safety purposes.


V. CONCLUSION

In the current context of increasing illegal activity, it is crucial to be able to independently detect and identify firearms carried by individuals by analyzing surveillance footage, without the need for human involvement. Precise categorization and identification of weapons are crucial for effectively anticipating and deterring criminal activities, allowing the relevant authorities to take preventive steps. Portable or handheld firearms are important tools in a variety of illicit operations, such as theft, illegal hunting, and crimes of violence. It is crucial to recognize these weapons in order to predict potential criminal elements in security camera imagery and take immediate and necessary action. This work introduces an innovative model, built on the VGG-Net architecture, specifically created to identify and categorize seven different types of weapons. A novel dataset comprising these weapon categories was compiled for the purposes of training and assessment. Comparative analyses were performed against established models, namely VGG-16 (with a success accuracy of 89.75%), ResNet-50 (with a success accuracy of 93.70%), and ResNet-101 (with a success accuracy of 83.33%). These analyses demonstrated the superior effectiveness of the proposed model, which achieved an excellent success rate of 98.40%. Additional assessments involved testing the model on photos depicting humans carrying firearms. The utilization of the region proposal approach facilitated the generation of novel test images, enabling the creation of realistic testing scenarios. The model demonstrated exceptional performance by precisely identifying weapon photos against various backdrops. Overall, the proposed model serves as a significant resource for addressing security weaknesses and improving the effectiveness and efficiency of security personnel in different contexts. The study's conclusions can be applied directly to different circumstances, which increases its uniqueness and potential impact. The expected contributions are to provide guidance and inspiration for similar research, namely in the field of independent security units. Future research could prioritize the development of a robust infrastructure for autonomous security robots capable of independently detecting and interpreting incoming data. Such automated units could promptly alert security forces, examine information in real time using security surveillance systems, and enhance categorization precision. Additionally, future research efforts could prioritize the identification of firearms that are covered or concealed, thereby enhancing the capabilities of weapon detection equipment.

REFERENCES

[1] Raturi, G., Rani, P., Madan, S., & Dosanjh, S. (2019, November). ADoCW: An automated technique for identifying hidden weapons. In 2019 Fifth International Conference on Image Information Processing (ICIIP) (pp. 181-186). IEEE.
[2] Bhagyalakshmi, P., Indhumathi, P., & Bhavadharini, L. R. (2019). Live video monitoring for automated identification of firearms. International Journal of Trend in Scientific Research and Development (IJTSRD), 3(3).
[3] Lim, J., Al Jobayer, M. I., Baskaran, V. M., Lim, J. M., Wong, K., & See, J. (2019, November). Utilizing deep neural networks to detect firearms in surveillance videos. In 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) (pp. 1998-2002). IEEE.
[4] Yuan, J., & Guo, C. (2018, June). An advanced neural network approach for identifying hazardous machinery. In 2018 Eighth International Conference on Information Science and Technology (ICIST) (pp. 159-164). IEEE.
[5] Ilgin, F. Y. (2020). Energy-based spectrum sensing with copulas for cognitive radios. Bulletin of the Polish Academy of Sciences: Technical Sciences, 68(4), 829-834.
[6] Navalgund, U. V., & Priyadharshini, K. (2018, December). Deep learning-based approach for detecting criminal intentions. In 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET) (pp. 1-6). IEEE.
[7] Chandan, G., Jain, A., & Jain, H. (2018, July). Real-time detection and tracking of objects using deep learning and OpenCV. In 2018 International Conference on Inventive Research in Computing Applications (ICIRCA) (pp. 1305-1308). IEEE.
[8] Deng, L., & Yu, D. (2014). Exploration of deep learning techniques and their practical uses. Foundations and Trends® in Signal Processing, 7(3-4), 197-387.
[9] Bengio, Y. (2009). Studying complex structures for artificial intelligence. Foundations and Trends® in Machine Learning, 2(1), 1-127.
[10] LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
[11] Song, H. A., & Lee, S. Y. (2013). Hierarchical representation using NMF. In Neural Information Processing: 20th International Conference, ICONIP 2013, Daegu, Korea, November 3-7, 2013, Proceedings, Part I (pp. 466-473). Springer Berlin Heidelberg.
[12] Masood, S., Ahsan, U., Munawwar, F., Rizvi, D. R., & Ahmed, M. (2020). Image scene recognition using a convolutional neural network. Procedia Computer Science, 167, 1005-1012.
[13] Masood, S., Ahsan, U., Munawwar, F., Rizvi, D. R., & Ahmed, M. (2020). Video scene recognition using a convolutional neural network. Procedia Computer Science, 167, 1005-1012.
[14] Sai, B. K., & Sasikala, T. (2019, November). Object detection and counting in an image using the TensorFlow object detection API. In 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT) (pp. 542-546). IEEE.


[15] Verma, G. K., & Dhillon, A. (2017, November). A portable firearm detection system utilizing the faster region-based convolutional neural network (Faster R-CNN) deep learning algorithm. In Proceedings of the 7th International Conference on Computer and Communication Technology (pp. 84-88).
[16] Warsi, A., Abdullah, M., Husen, M. N., & Yahya, M. (2020, January). Review of algorithms for automatic detection of handguns and knives. In 2020 14th International Conference on Ubiquitous Information Management and Communication (IMCOM) (pp. 1-9). IEEE.
[17] Olmos, R., Tabik, S., & Herrera, F. (2018). Utilizing deep learning to recognize automatic handguns in videos and trigger an alarm. Neurocomputing, 275, 66-72.
[18] Asnani, S., Ahmed, A., & Manjotho, A. A. (2014). A bank security system developed using weapon detection with Histogram of Oriented Gradients (HOG) features. Asian Journal of Engineering, Sciences & Technology, 4(1).
[19] Lai, J., & Maples, S. (2017). Developing a real-time gun detection classifier. Course CS231n, Stanford University.
[20] Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations.
[21] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Image recognition using deep residual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
[22] Olmos, R., Tabik, S., Lamas, A., Perez-Hernandez, F., & Herrera, F. (2019). A strategy for decreasing false positives in handgun identification with deep learning by fusing binocular images. Information Fusion, 49, 271-280.
[23] Castillo, A., Tabik, S., Pérez, F., Olmos, R., & Herrera, F. (2019). Preprocessing based on brightness enables the automatic detection of cold steel weapons in surveillance videos using deep learning. Neurocomputing, 330, 151-161.
[24] Ineneji, C., & Kusaf, M. (2019). A hybrid weapon identification technique combining material testing with a fuzzy logic system. Computers & Electrical Engineering, 78, 437-448.
[25] Dwivedi, N., Singh, D. K., & Kushwaha, D. S. (2019, December). Deep convolutional neural networks for weapon categorization. In 2019 IEEE Conference on Information and Communication Technology (pp. 1-5). IEEE.
[26] Egiazarov, A., Mavroeidis, V., Zennaro, F. M., & Vishi, K. (2019, November). Detection and segmentation of firearms using a combination of semantic neural networks. In 2019 European Intelligence and Security Informatics Conference (EISIC) (pp. 70-77). IEEE.
[27] Abdallah, H. B., Abdellatif, T., & Chekir, F. (2018). AMSEP: an automated system that manages security at multiple levels for multimedia event processing. Procedia Computer Science, 134, 452-457.
[28] Grega, M., Matiolański, A., Guzik, P., & Leszczuk, M. (2016). Automated identification of images including firearms and knives in a CCTV image. Sensors, 16(1), 47.
[29] Tiwari, R. K., & Verma, G. K. (2015). A framework utilizing computer vision and the Harris interest point detector for visually detecting guns. Procedia Computer Science, 54, 703-712.
[30] Xu, Z., Tian, Y., Hu, X., & Pu, F. (2015, September). Understanding potentially hazardous human events through a human-object interaction paradigm. In 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) (pp. 1-5). IEEE.
[31] Angelova, A., Krizhevsky, A., & Vanhoucke, V. (2015, May). Pedestrian detection using a deep network with a wide field of view. In 2015 IEEE International Conference on Robotics and Automation (ICRA) (pp. 704-711). IEEE.

