
Volume 9, Issue 1, January – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Design and Implementation of a Smoke/Fire Detection using Computer Vision and Edge Computing

Nwosu Ifeoma L.1; Alagbu Ekene E.2*; Ezeagwu Christopher O.3; Okafor Chukwunenye S.4; Obiadi Ifeanyi P.5
1,2,3,4 Department of Electronic and Computer Engineering, Nnamdi Azikiwe University, Awka, Anambra State, Nigeria

Corresponding Author: Alagbu Ekene E.2*

Abstract:- In Nigeria, over 2,000 reported fire outbreaks resulted in ₦1 trillion worth of property damage. Likewise, a record of 31 market fires was reported in various states in the country. These numbers point to the dangers posed by fires and the need for a faster detection approach. This paper presents a computer vision-based smoke and fire detection system that is run on an edge server. The proposed system consists of a custom convolutional neural network (CNN) model which is utilized to extract features in image frames to identify fire and smoke occurrences. A k-fold cross validation algorithm has been applied to a simplified CNN model with a small number of layers in order to improve the performance of the image classification. The experimental analysis of the model shows that the proposed system is capable of classifying fire images accordingly, with an AUC value of over 0.67 in each class. This model is recommended for use in deep learning tasks that require automatic feature extraction and object detection in image processing applications.

I. INTRODUCTION

Fire is one of the most devastating natural disasters, causing significant property damage and loss of lives each year. Although fires serve as a benefit for human activities such as cooking, a fire left untended poses a danger that also threatens people's livelihood. According to available statistics by Abeku et al (2021), 31 market fires were documented in Nigeria within a period of 18 months between 2020 and 2021. Likewise, Daily Trust (2022) reported a total of 53 fire cases (in both indoor and outdoor environments) between January and March 2022 in which 19 persons lost their lives. Fire results in significant property damage, such as the ₦1 trillion worth of damage in Nigeria in 2022 (Vanguard, 2023) and the $1 billion in damages from manufacturing and industrial fires in the US (NFPA, 2019). These fire occurrences are often attributed to faulty electrical wiring, storage of combustible fuels in an indoor environment and improper disposal of cigarette stubs. They could also result from dry weather or an intentional act (arson), but no matter the cause it is important to curb fire outbreaks in time because they pose a constant threat to people's safety, their possessions, and the environment.

Fig 1 Market Fire Outbreak Frequency in Nigeria within 2019 and 2020 (Orodata, 2020)

Traditional fire detection approaches, fire alarms and smoke sensors, serve the purpose of alerting building residents to imminent fire dangers once the fire or smoke has reached a certain threshold value. This quality makes these devices not well suited for environments where a fire might go unnoticed for a while and still cause significant destruction. Thus, they are not effective as early warning systems for hazardous environments and remote areas (Avazov et al, 2023). These systems also have other limitations, such as false positives and device malfunction, and these restrictions have the potential to severely impair their efficiency, which could have disastrous effects in emergency situations.

In recent years, advances in computer vision research have prompted the use of surveillance camera data for automatic detection and object recognition. Computer vision, a sub-field of artificial intelligence, allows computers to extract information from images and videos through the use of machine learning algorithms. This has proven to have versatile applications in scenarios such as accident and disaster detection in airports, industries and other open environments, traffic monitoring and control in smart cities, and home security (Alamgir, 2020). The limitations of conventional fire detection systems can be effectively addressed by utilizing computer vision. Surveillance cameras, which are already mounted for security purposes, can be used to monitor the surrounding environment for fire occurrences. Computer vision also offers an additional advantage: it can be used to verify the intensity and nature of an alarm (whether it is false or real) without requiring a physical presence. Edge computing is a computing paradigm that brings processing and data storage closer to the devices where data is generated (Liang et al, 2022). The limitations of conventional fire detection systems can be effectively addressed by the combination of computer vision and edge computing technologies. By deploying computer vision algorithms at the edge, visual data can be analysed quickly. This enhances the system's ability to perform analysis instantly without depending on remote processing centres.

This research proposes the use of these technologies to develop a smoke and fire detection system with improved detection capabilities. The proposed system consists of a custom CNN model which is utilized to extract features in image frames to identify smoke and fire occurrences.

 Problem Statement
The problem to be addressed through this study is the shortcomings of traditional fire detectors in urban environments. These devices, which are commonly in use, suffer from a few disadvantages. Firstly, they are less effective in high-ceiling buildings and open areas such as factories, markets and warehouses. The buildup of flames and smoke has to be directly beneath the detector and at a certain level in order to trigger an alarm or response signal. Also, they require consistent maintenance to ensure reliability.

These problems, when not properly kept in check, can lead to great damage to property and loss of lives. In recent times, the occurrence of market fires has also led to the collapse of various businesses, emotional trauma, litigation, damage to brand reputation and financial loss. Smoke, a product of fire, is also known to cause instantaneous loss of lives. Long-term exposure to smoke and other fumes produced by fires can cause several health issues such as breathing difficulties and eye irritation.

Thus, it is of great importance to address these issues, as doing so would mitigate the incurred losses and also lead to the development of a more reliable and efficient fire detection approach.

 Aim
The aim of this work is to develop and implement a smoke/fire detection system using computer vision and edge computing.

II. LITERATURE REVIEW

Recently, various approaches proposed by researchers have explored the use of machine learning and deep learning algorithms in the detection of fire and smoke. These works explored the use of CNNs and their variants (Kukuk and Kilimci, 2021) and the use of object detection models such as YOLO, SSD and Faster R-CNN (Zheng et al, 2022) in the identification of wildfire occurrences and their spread. The difficulties in algorithm research for vision-based methods can be attributed to the atypical nature of fire flames and smoke (Lee and Shim, 2019).

The research by Sheng et al (2021) proposed a statistical image feature-based deep belief network (DBN) for fire detection. The DBN could automatically learn fault features in multiple fire stages, layer by layer, using restricted Boltzmann machines (RBM). Yavuz Selim et al (2021), in their paper, utilized a transfer learning method using Inception V3, SqueezeNet, VGG16 and VGG19. Their detection process was split into three stages: first, the flame region is extracted using basic image processing algorithms; next, the mobility of the flame is analysed by comparing the video frames of the fire image. After the training of the CNN, their model showed a 98.8% classification success on the Inception V3 architecture. Another research examined the development of an energy-efficient system based on CNN for early smoke detection in both normal and foggy IoT environments (Khan et al, 2019). Ren et al (2021) proposed an intelligent detection technology, using fuzzy logic reasoning, for electric fires based on multi-information fusion for green buildings. An implementation of a machine learning wildfire detection and alert system by Ranadive et al (2022) is currently being used in the USA.

On the basis of object detection models, Mukhiddinov, Abdusalomov, and Cho (2022) presented a vision-based fire detection and notification system that utilized smart glasses and deep learning models for blind and visually impaired (BVI) people. Their model employed an improved YOLOv4 model with a convolutional block attention module. Another study by Xue et al (2022) proposed an improved forest fire small-target detection model based on the YOLOv5 architecture. Their model showed an improved performance, with the mAP metric increasing by over 10.1%.

The study by Saponara, Elhanashi, and Gagliardi (2021) presented a real-time video-based fire and smoke detection using a YOLOv2 Convolutional Neural Network (CNN) in antifire surveillance systems. Their work was deployed on a low-cost embedded device (Jetson Nano), which was composed of a single, fixed camera per scene, working in the visible spectral range. Xu, H. et al (2022) presented Light-YOLOv5, which modified the YOLOv5 architecture by altering some modules in the network and introducing a global attention mechanism (GAM) for effective feature extraction.

Several studies utilized the ensemble learning approach in order to improve the detection accuracy of the CNN models being adopted. An example of such a system can be found in the research by Xu, R. et al (2021). Their work utilized two individual learners (YOLOv5 and EfficientDet) for the detection process and another learner (EfficientNet) for learning global information in order to minimize false positive detection. This resulted in a decrease of false positives by 51.3%. The study by Almoussawi et al proposed a CNN-AE based pipeline for classification and verification of fire-related images. Nguyen et al (2021) proposed a CNN-LSTM network for fire detection, and Grari et al (2022) implemented a regression ensemble learning model for predicting fires utilizing NASA's FIRMS dataset.

Based on the literature, research in smoke and fire detection still requires improvements in the design and implementation of a simplified deep learning model for efficient detection, as well as the provision of a diverse dataset for detection. Likewise, there are very few studies that focus on monitoring ongoing flames and smoke in near real-time using deep learning methods. As a consequence, we want to continue our study in this field and enhance our findings. This study proposes the application of a k-fold cross validation CNN model to rapidly identify fire occurrences with a low rate of false positives. It further explores the use of the CNN and the OpenCV real-time computer vision library to avert numerous fire outbreaks, which makes this research novel.

III. MATERIALS AND METHODS

This section explores the details of the custom CNN model created for the purpose of the smoke and fire detection system. The Python Keras library was used in the creation of a sequential CNN model utilized for the feature extraction and classification process. A custom dataset was curated for training and testing of the model. The data was gathered from the Internet using image tags related to fires in buildings and other structures in an urban setting.

 Image Preprocessing
This involves image resizing (adjusting the height and width of the image). Functions from Python's OpenCV library were used in this task to resize the test images to a size of 150 x 150 pixels. Then the images are normalized using equation (1). This process adjusts the brightness of the image and ensures that the image pixels are in the [0, 1] range.

img_p = img / 255   (1)

 CNN Mathematical Model
This part of the CNN is in charge of extracting the features in each image for easier detection across the layers. It comprises the convolutional layer and the pooling layer.

 Convolutional Layers
Colored images of 150 × 150 × 3 pixels are fed as the input for the model. The input image is subjected to 16 filters with a size of 3 × 3, producing 16 feature maps in the first convolution layer. The features are extracted during the convolution process using the filter. The feature map F is given by the convolution operation between the input image M and the filter T, as shown in equation (2).

F[i, j] = (M * T)[i, j]   (2)

The ij-th entry of the feature maps is as shown in equation (3), where the sums run over the spatial dimensions and channels of the input.

f[i, j] = Σ_{x=1..h_m} Σ_{y=1..w_m} Σ_{z=1..n} M[x, y, z] T[i + x − 1, j + y − 1, z]   (3)

 Pooling Layers
In the max pooling layer, a pool size of 2 × 2 pixels is used to choose the maximum activations of these 16 feature maps with a stride of 3 × 3 pixels. The stride indicates how far the pooling window moves for each pooling step; this results in a reduction in the size of the feature maps. The pooling layer ensures that the most relevant details (the maximum values) are kept while the less significant ones (the minimum values) are removed.

P = φ_p(M * T)   (4)

where φ_p is the pooling function. The dimension of the pooling layer output is obtained from the formula in equation (5), where h_m × w_m represents the dimensions of the input, h_t × w_t represents the dimensions of the filter, s is the stride length and n is the number of channels in the input.

dim of P = ((h_m + 1 − h_t) / s) × ((w_m + 1 − w_t) / s) × n   (5)

 Flattening Layer
The classification process of the CNN starts off with a flattening layer which reduces its input (the stacked output of the convolutional and pooling layers) into a 1-dimensional shape for ease of computation. Its output is passed to the Dense layers.

 Dense Layer
The first dense layer has 32 units and the second one (the last layer of the CNN) has four units (i.e., the classes in the dataset). The mathematical operation of the dense layer is as given in equation (6), where φ_d is the activation function of the dense layer, P is the input to the layer, w is the kernel and b is the bias of the layer.

z = φ_d(Σ_i w_i P_i + b)   (6)
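As an illustration of the pipeline described above, the following is a minimal sketch (not the authors' exact code) of the preprocessing step of equation (1) and the layer stack of the mathematical model, assuming the Keras Sequential API. The activation choices follow the next subsection, the optimizer and loss follow the Algorithm and Flowchart subsection, and the dropout rate is an illustrative assumption.

import cv2
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def preprocess(path):
    img = cv2.imread(path)                 # read the image with OpenCV
    img = cv2.resize(img, (150, 150))      # resize to 150 x 150 pixels
    return img.astype(np.float32) / 255.0  # equation (1): scale pixels to [0, 1]

def build_model():
    model = Sequential([
        Conv2D(16, (3, 3), activation="relu", input_shape=(150, 150, 3)),  # 16 filters of 3 x 3
        MaxPooling2D(pool_size=(2, 2), strides=(3, 3)),                    # 2 x 2 pooling, stride 3
        Flatten(),                                                         # 1-D feature vector
        Dense(32, activation="relu"),                                      # first dense layer, 32 units
        Dropout(0.5),                                                      # dropout rate assumed, not stated in the text
        Dense(4, activation="softmax"),                                    # four output classes
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model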

 Activation Functions
The ReLU activation function is used for all the layers except the last dense layer, which uses a Softmax activation function. The ReLU function in this work is the standard function which selects an element-wise maximum of 0 and the input data. A bias value, b, is added to the convolved output of each layer and the activation is calculated as shown in equation (7), where c is the output at the layer where the activation function is applied.

c = φ_a(M * T + b)   (7)

The Softmax function, used in the last dense layer, acts as a classifier because it returns a vector of probability values whose elements sum to 1. In this case, we have four probability classes for the model. The Softmax activation is performed using equation (8), where z is the output of the previous dense layer.

softmax(z) = exp(z) / Σ exp(z)   (8)

 Algorithm and Flowchart
The model is built with the following algorithmic steps:

 Import necessary libraries and modules: This includes Keras, sklearn, os, numpy, and cv2.
 Initialize necessary lists: This includes lists to store models, histories, test predictions, and test labels.
 Load and pre-process the data:
 Get the list of all images and their corresponding labels from the data directory.
 Convert labels to integers and then one-hot encode them.
 Convert one-hot encoded labels to single labels.
 Perform k-fold Cross-Validation. For each fold in the k-fold Cross-Validation:
 Initialize a Sequential model and add layers to it. This includes Conv2D, MaxPooling2D, Flatten, Dense, and Dropout layers.
 Compile the model with the Adam optimizer, categorical cross-entropy loss, and accuracy metrics.
 Load and pre-process the training and test images.
 Fit the model on the training data and validate it on the test data.
 Use the model to predict the test set and store the predictions and actual labels in the lists.
 Save the model in the list of models.
 Calculate the average accuracy and loss at each epoch.

These implementation steps serve to ensure that the CNN is trained on a diverse set of data and that its operation is evaluated on key metrics such as the loss and accuracy. A diagrammatic flow of the design process detailing the interconnections of the model design can be seen in Figure 2, and a condensed code sketch of these steps is given after the flowchart.


Fig 2 Flowchart
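To make the flowchart concrete, the following is a sketch of the k-fold training loop outlined in the algorithmic steps above. The dataset loader load_dataset(), the preprocess() and build_model() helpers from the earlier sketch, the batch size and the random seed are illustrative assumptions rather than the authors' exact implementation.

import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.utils import to_categorical

paths, labels = load_dataset("data/")          # hypothetical helper: image paths and integer labels 0-3
X = np.array([preprocess(p) for p in paths])   # resized, normalized images (equation (1))
y = to_categorical(np.array(labels), num_classes=4)

models, histories, test_preds, test_labels = [], [], [], []
kfold = KFold(n_splits=5, shuffle=True, random_state=42)   # five folds, as in Section IV

for train_idx, test_idx in kfold.split(X):
    model = build_model()                                  # the Sequential CNN sketched earlier
    history = model.fit(X[train_idx], y[train_idx],
                        validation_data=(X[test_idx], y[test_idx]),
                        epochs=25, batch_size=32)          # 25 epochs, as reported in Section IV
    test_preds.append(model.predict(X[test_idx]))          # store fold predictions
    test_labels.append(y[test_idx])                        # store fold ground truth
    models.append(model)
    histories.append(history)

# average accuracy and loss at each epoch across the folds
avg_val_acc = np.mean([h.history["val_accuracy"] for h in histories], axis=0)
avg_val_loss = np.mean([h.history["val_loss"] for h in histories], axis=0)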

IV. RESULT AND ANALYSIS

This section provides a detailed overview of the performance of the k-fold cross-validation CNN model on the image dataset. It is evaluated on the key performance metrics of a deep learning model. The CNN model was implemented using the Python TensorFlow and Keras platforms. Training and validation were performed on a Lenovo PC with Windows 11 Pro 64-bit OS, an 11th Gen Intel® Core i5 processor with a clock frequency of 2.40 GHz and 8 GB of RAM.

The model is a multi-class classification algorithm that classifies into one of four categories; thus a 4 x 4 confusion matrix is generated which shows the number of actual labels against the number of predicted labels in each class (Figure 3). The images are classified into one of the four classes, namely smoke (class 0), fire (class 1), fire and smoke (class 2) and neither (class 3). The diagonal of the matrix represents the number of correctly predicted images in each class. The off-diagonal entries represent the number of misclassified images between the classes, and it is worth noting that the model learned to distinguish fire/smoke images from normal images, with smoke images being correctly classified in 580 cases.

Fig 3 The Confusion Matrix of True and Predicted Labels of the Proposed Model
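A confusion matrix like the one in Figure 3 can be derived from the stored fold predictions with scikit-learn; the snippet below is a sketch that reuses the test_preds and test_labels lists from the training loop above.

import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.argmax(np.concatenate(test_labels), axis=1)   # one-hot ground truth -> class indices
y_pred = np.argmax(np.concatenate(test_preds), axis=1)    # softmax outputs -> predicted class indices
cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])
print(cm)   # rows: true classes (smoke, fire, fire and smoke, neither); columns: predicted classes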

The accuracy of our model over the training and validation data is evaluated as shown in Figure 4. A total of 25 epochs was used to train the CNN model and a maximum accuracy of over 60% was recorded. The model is then evaluated in terms of the loss metric. The model produces a minimal loss of 0.8 after training due to the implementation of the cross-validation algorithm and the Adam optimizer function.
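Curves such as those in Figure 4 can be reproduced from the stored training histories; the following plotting sketch assumes the histories list from the cross-validation sketch and matplotlib, and is illustrative rather than the authors' plotting code.

import numpy as np
import matplotlib.pyplot as plt

epochs = range(1, 26)   # 25 training epochs
avg_acc   = np.mean([h.history["accuracy"] for h in histories], axis=0)
avg_vacc  = np.mean([h.history["val_accuracy"] for h in histories], axis=0)
avg_loss  = np.mean([h.history["loss"] for h in histories], axis=0)
avg_vloss = np.mean([h.history["val_loss"] for h in histories], axis=0)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(epochs, avg_acc, label="training accuracy")
ax1.plot(epochs, avg_vacc, label="validation accuracy")
ax1.set_xlabel("epoch"); ax1.legend()
ax2.plot(epochs, avg_loss, label="training loss")
ax2.plot(epochs, avg_vloss, label="validation loss")
ax2.set_xlabel("epoch"); ax2.legend()
plt.show()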

Fig 4 (a) Graph Depicting a Gradual Increase in Training and Validation Accuracy (b) Graph Depicting a Steady Reduction for
the Training and Validation Loss for the CNN Model

The ROC curves for the model show the AUC values of the classes defined in the code: AUC values of 0.67, 0.70, 0.76 and 0.69 correspond to classes 0, 1, 2 and 3 respectively. This implies that the model separates class 0 images from the other classes with a probability of 67%, class 1 with 70%, class 2 with 76% and class 3 with 69%.
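The per-class ROC curves of Figure 5 follow a one-vs-rest treatment of the four classes; a sketch using scikit-learn and the pooled fold outputs from the earlier sketches is shown below.

import numpy as np
from sklearn.metrics import roc_curve, auc

y_true_onehot = np.concatenate(test_labels)   # one-hot ground truth, shape (N, 4)
y_score = np.concatenate(test_preds)          # softmax probabilities, shape (N, 4)

for c in range(4):                            # classes 0-3: smoke, fire, fire and smoke, neither
    fpr, tpr, _ = roc_curve(y_true_onehot[:, c], y_score[:, c])
    print(f"class {c}: AUC = {auc(fpr, tpr):.2f}")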

Fig 5 Graph Depicting the ROC Curves of the Proposed Model with Specified AUC Values of each Classification Category as it
Relates to (a) Class 0 (b) Class 1 (c) Class 2 (d) Class 3 of the CNN Model

 Performance Analysis of the Cross-Validation Algorithm


The cross validation is implemented for the model since only a small number of images is available for training the neural network. This is achieved through k-fold cross validation with five folds in order to reduce the computational complexity. The performance analysis of the model in terms of the training accuracy and loss for the different folds of the algorithm is depicted in the graphs in Figure 6. It shows a steady improvement in the accuracy and loss metrics across the training epochs, and the average of these values was computed to obtain the final performance of the system.

Fig 6 Accuracy and Loss Performance of the CNN across the Folds of the Cross-Validation Algorithm

V. DISCUSSION

While there are a lot of false positives and false negatives in the case of smoke, the custom convolutional neural network has a high accuracy rate for detecting flames in both indoor and outdoor contexts. Efficient early smoke detection is another feature of the developed model. To construct a simulated alarm system setting, a Python play-sound module was used alongside the OpenCV library. The "Alarm Sound.mp3" audio file was linked to the deep learning model, and the program is also granted access to the webcam. Once smoke or a potential fire is detected in the captured image frames, the alarm sound is played and the user is alerted to the impending danger.

VI. CONCLUSION

This research presented a novel computer vision-based fire and smoke detection system running on an edge server. The proposed system aims to overcome the issues of fire and smoke detection in two stages: first, the captured image is pre-processed so as to highlight the key features present in the image. This was achieved using the OpenCV library implementation in the Python programming language. Afterwards, the CNN model is trained on the processed image in order to properly classify it into the related class (fire, smoke or neither). The CNN model was built using Python, and the deep learning framework Keras was utilized to ensure ease of implementation.

The use of a k-fold cross validation algorithm has been demonstrated on a simplified CNN model with a small number of layers in order to improve the performance of the image classification. The experimental analysis of the model shows that the proposed system is capable of classifying fire and smoke images accordingly, with an AUC value of over 0.67 in each class. Likewise, the accuracy is observed to increase by 15% across each fold of the training process. This model is recommended for use in deep learning tasks that require automatic feature extraction and object detection in image processing applications.

 Future Research on the Topic can be Carried Out in Areas Pertaining to:

 The investigation of the use of other deep architectures like Deep Belief Networks and Recurrent Neural Networks for the detection of smoke and fire.
 Fire detection using a moving camera.
 The monitoring of the fire spread and direction of movement.

REFERENCES

[1]. Abeku, T., Michael, D., Akpan-Nsoh, I., Udeajah, G., Ogugbuaja, C., Oyewole, R., Godwin, A., & Agboluaje, R. (2021). Losses to market fires hit N41.54 billion in two years. The Guardian Newspaper. https://guardian.ng/news/losses-to-market-fires-hit-n41-54-billion-in-two-years/
[2]. Alamgir, N. (2020). Computer Vision Based Smoke and Fire Detection for Outdoor Environments. Published PhD Dissertation, School of Electrical Engineering and Robotics, Science and Engineering Faculty, Queensland University of Technology. https://eprints.qut.edu.au/201654/1/Nyma_Alamgir_Thesis.pdf
[3]. Avazov, K., Hyun, A. E., Sami S, A. A., Khaitov, A., Abdusalomov, A. B., & Cho, Y. I. (2023). Forest fire detection and notification method based on AI and IoT approaches. Future Internet, 15(2), 61. doi:10.3390/fi15020061
[4]. Daily Trust (2022). How Fire Wreaked 700 shops, Killed 19 in 3 months. Daily Trust. https://dailytrust.com/how-fire-wreaked-700-shops-killed-19-in-3-months/
[5]. Grari, M., Idrissi, I., Boukabous, M., Moussaoui, O., Azizi, M., & Moussaoui, M. (2022). Early wildfire detection using machine learning model deployed in the fog/edge layers of IoT. Indonesian Journal of Electrical Engineering and Computer Science, 27(2), 1062. doi:10.11591/ijeecs.v27.i2.pp1062-1073
[6]. Khan, S., Muhammad, K., Mumtaz, S., Baik, S. W., & de Albuquerque, V. H. C. (2019). Energy-efficient deep CNN for smoke detection in foggy IoT environment. IEEE Internet of Things Journal, 6(6), 9237-9245.
[7]. Kukuk, S. B., & Kilimci, Z. H. (2021). Comprehensive analysis of forest fire detection using deep learning models and conventional machine learning algorithms. International Journal of Computational and Experimental Science and Engineering, 7(2), 84–94. doi:10.22399/ijcesen.950045
[8]. Lee, Y., & Shim, J. (2019). False positive decremented research for fire and smoke detection in surveillance camera using spatial and temporal features based on deep learning. Electronics, 8(10), 1167. doi:10.3390/electronics8101167
[9]. Liang, S., Wu, H., Zhen, L., Hua, Q., Garg, S., Kaddoum, G., Hassan, M. M., & Yu, K. (2022). Edge YOLO: Real-time intelligent object detection system based on edge-cloud cooperation in autonomous vehicles. IEEE Transactions on Intelligent Transportation Systems, 23(12), 25345-25360.

[10]. Mukhiddinov, M., Abdusalomov, A. B., & Cho, J. (2022). Automatic fire detection and notification system based on improved YOLOv4 for the blind and visually impaired. Sensors (Basel, Switzerland), 22(9), 3307. doi:10.3390/s22093307
[11]. NFPA (2019). Fires in U.S. Industrial or Manufacturing Properties. https://www.vosslawfirm.com/blog/fire-statistics-for-industrial-and-manufacturing-properties.cfm
[12]. Nguyen, M. D., Vu, H. N., Pham, D. C., Choi, B., & Ro, S. (2021). Multistage real-time fire detection using convolutional neural networks and long short-term memory networks. IEEE Access: Practical Innovations, Open Solutions, 9, 146667–146679. doi:10.1109/access.2021.3122346
[13]. Ranadive, O., Kim, J., Lee, S., Cha, Y., Park, H., Cho, M., & Hwang, Y. K. (2022). Image-based Early Detection System for Wildfires. arXiv preprint arXiv:2211.01629.
[14]. Ren, X., Li, C., Ma, X., Chen, F., Wang, H., Sharma, A., Gaba, G. S., & Masud, M. (2021). Design of multi-information fusion based intelligent electrical fire detection system for green buildings. Sustainability, 13(6), 3405. doi:10.3390/su13063405
[15]. Saponara, S., Elhanashi, A., & Gagliardi, A. (2021). Real-time video fire/smoke detection based on CNN in antifire surveillance systems. Journal of Real-Time Image Processing, 18(3), 889–900. doi:10.1007/s11554-020-01044-0
[16]. Sheng, D., Deng, J., Zhang, W., Cai, J., Zhao, W., & Xiang, J. (2021). A Statistical Image Feature-Based Deep Belief Network for Fire Detection. Complexity, vol. 2021, 1-12. doi:10.1155/2021/5554316
[17]. Vanguard (2023). Nigeria recorded 2,056 fire incidents, N1trn losses in 2022 – GOC. https://www.vanguardngr.com/2023/03/nigeria-recorded-2056-fire-incidents-n1trn-losses-in-2022-goc/
[18]. Xu, H., Li, B., & Zhong, F. (2022). Light-YOLOv5: A lightweight algorithm for improved YOLOv5 in complex fire scenarios. Applied Sciences, 12(23), 12312.
[19]. Xu, R., Lin, H., Lu, K., Cao, L., & Liu, Y. (2021). A forest fire detection system based on ensemble learning. Forests, 12(2), 217. doi:10.3390/f12020217
[20]. Xue, Z., Lin, H., & Wang, F. (2022). A small target forest fire detection model based on YOLOv5 improvement. Forests, 13(8), 1332. doi:10.3390/f13081332
[21]. Yavuz Selim, T., Koklu, M., & Altin, M. (2021). Fire Detection in Images Using Framework Based on Image Processing, Motion Detection and Convolutional Neural Network. International Journal of Intelligent Systems and Applications in Engineering (IJISAE), 9(4), 171-177.
[22]. Zheng, X., Chen, F., Lou, L., Cheng, P., & Huang, Y. (2022). Real-time detection of full-scale forest fire smoke based on deep convolution neural network. Remote Sensing, 14(3), 536. doi:10.3390/rs14030536
