
Volume 10, Issue 2, February – 2025 International Journal of Innovative Science and Research Technology

ISSN No: 2456-2165 https://doi.org/10.38124/ijisrt/25feb1629

Real-Time Fire Hazard Monitoring with Deep Learning

Sai Nivedha N.1; Dhamotharan R.2
1M.Sc. Data Science; 2Assistant Professor
1,2Department of Computer Science and Information Technology,
Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu

Publication Date: 2025/03/15

Abstract: Fire outbreaks pose a significant threat to lives and property, making early detection crucial for minimizing
damage. Traditional fire detection methods often rely on manual monitoring or conventional image analysis techniques,
which can lead to delayed detection and lower accuracy. To address these challenges, this project implements an AI-powered
fire detection system using the YOLOv8 object detection model. The model was trained on a dataset of 2,509 images, with
1,004 used for training, 754 for validation, and 751 for testing. The system processes video input in real time, detecting fire
and marking affected areas with a bounding box and confidence score. Detection details, including the timestamp, fire status,
and confidence level, are logged in a CSV file for record-keeping. Additionally, an automated alert system is integrated using
Twilio’s SMS service, which immediately notifies designated authorities upon fire detection. The model achieves a mean
Average Precision (mAP) of 91.3%, a precision of 90.3%, and a recall of 86.9%, demonstrating high reliability in identifying
fire incidents. With its ability to detect fire efficiently and provide real-time alerts, this system offers a scalable and effective
solution for fire monitoring and prevention.

Keywords: Computer Vision, Fire Detection, Image Processing, Twilio SMS Notification, Bounding Box Detection, Deep
Learning.

How to Cite: Sai Nivedha N.; Dhamotharan R. (2025). Real-Time Fire Hazard Monitoring with Deep Learning. International
Journal of Innovative Science and Research Technology, 10(2), 2060-2069. https://doi.org/10.38124/ijisrt/25feb1629.

I. INTRODUCTION

Fire detection plays a vital role in ensuring safety and minimizing damage caused by fire hazards. Traditional fire alarm systems primarily rely on smoke, heat, or radiation sensors, which require fire particles to reach the sensor before detection, leading to delays. Additionally, these systems do not provide detailed information such as the fire's location, size, or intensity. With the widespread use of surveillance cameras, integrating vision-based fire detection offers a more effective solution.

Unlike sensor-based systems, image-based detection identifies fire visually in real time without waiting for smoke or heat to spread, significantly reducing response time. A single camera placed at a strategic vantage point can cover large areas, enhancing efficiency compared to conventional sensors, which are more suited for confined spaces.

Additionally, vision-based systems allow authorities to verify incidents through surveillance footage, reducing false alarms. This work focuses on developing a real-time fire detection system using the YOLO object detection model, leveraging deep learning and computer vision techniques. By processing video input, the system detects fire, highlights affected areas with bounding boxes, logs detection details, and sends automated alerts to authorities, ensuring a faster and more accurate response to potential fire incidents.

II. THE PROPOSED SYSTEM

A. Object Detection Using YOLOv8
You Only Look Once (YOLO) is a real-time object detection system that processes an entire image in a single pass through a neural network. The image is divided into multiple regions, and the model predicts bounding boxes along with confidence scores for each detected object.

Here, YOLOv8 is used due to its improved accuracy and speed compared to earlier versions. The model is trained on a custom fire detection dataset using Roboflow, where 2,509 images were labelled and pre-processed. Since commonly used datasets like Common Objects in Context (COCO) do not include fire detection classes, a custom dataset was necessary. Roboflow 3.0 Object Detection (Fast) was used as the model type, with COCO as a checkpoint to enhance learning.
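For orientation, the Ultralytics Python API reduces this detection step to a few lines. The sketch below is illustrative rather than the authors' code; the weights file name (best.pt) comes from the paper, while the sample image path is a placeholder and the 0.5 confidence threshold is taken from Section III.

```python
# Minimal sketch: single-image fire detection with the trained YOLOv8
# weights via the Ultralytics API. The image path is a placeholder.
from ultralytics import YOLO

model = YOLO("best.pt")                        # trained fire-detection weights
results = model("sample_frame.jpg", conf=0.5)  # keep detections above 0.5

for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding-box corners (pixels)
        print(f"fire at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")
```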


The trained model, best.pt, detects fire in real time by drawing bounding boxes around affected areas with confidence scores. The final model achieves a mean Average Precision (mAP) of 91.3%, a precision of 90.3%, and a recall of 86.9%, ensuring efficient and accurate fire detection suitable for real-time monitoring applications.

III. YOLOv8 ALGORITHM

The process begins with preprocessing the input data, where each video frame undergoes auto-orientation correction to ensure proper alignment. Additionally, the frames are resized to 640x640 pixels, maintaining a consistent input size for the model, which helps in improving detection accuracy and computational efficiency.

Once the frames are pre-processed, the YOLOv8 model (best.pt) is loaded. Each video frame is passed through the model, which applies convolutional layers to extract features and uses bounding box regression to locate potential fire regions. The model then classifies the detected objects and assigns a confidence score. If the fire class is identified with a confidence score above 0.5, it is considered a valid detection.

After detecting fire, the system logs detection details such as the timestamp, fire status, and confidence score into a CSV file for documentation. To enhance interpretability, a bounding box is drawn around the detected fire region in the video frame.

Additionally, an automated alert system is integrated using Twilio's SMS service. When fire is detected for the first time, the system sends an SMS alert to designated authorities, ensuring a quick response. To prevent redundant alerts, the system ensures that an SMS is sent only once per fire detection event.

The system operates in real time, continuously analysing video frames until the video ends or a specified time limit is reached. Finally, the system releases the video stream and closes all windows, completing the fire detection process. The inclusion of pre-processing techniques such as auto-orientation correction and resizing further enhances detection reliability and model performance.
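A hedged end-to-end sketch of this loop is given below. It follows the steps just described (load best.pt, threshold at 0.5, draw boxes, log to CSV, alert once via Twilio); the video path, CSV file name, and Twilio credentials and phone numbers are placeholders, not values from the paper.

```python
# Illustrative sketch of the detection loop described above, assuming the
# Ultralytics, OpenCV, and Twilio Python packages. Credentials and file
# paths are placeholders.
import csv
from datetime import datetime

import cv2
from twilio.rest import Client
from ultralytics import YOLO

model = YOLO("best.pt")
sms = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder credentials
alert_sent = False                      # send at most one SMS per detection event

cap = cv2.VideoCapture("input_video.mp4")
with open("fire_log.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["timestamp", "fire_status", "confidence"])

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break                       # video ended
        # Ultralytics resizes internally; imgsz matches the 640x640 training size
        results = model(frame, imgsz=640, conf=0.5, verbose=False)
        for box in results[0].boxes:
            conf = float(box.conf)
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            # draw the bounding box and confidence score on the frame
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
            cv2.putText(frame, f"fire {conf:.2f}", (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
            # log timestamp, fire status, and confidence for record-keeping
            log.writerow([datetime.now().isoformat(), "FIRE", f"{conf:.2f}"])
            if not alert_sent:          # notify designated authorities only once
                sms.messages.create(body=f"Fire detected (confidence {conf:.2f})",
                                    from_="+10000000000", to="+10000000001")
                alert_sent = True
        cv2.imshow("Fire Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```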

Fig 1: YOLOv8 Algorithm

IV. TRAINING ALGORITHM / METHODOLOGY

This section outlines the methodology for detecting and localizing fire zones using the YOLOv8 model. The system is designed to automatically identify fire patterns in an image and accurately determine their location. The process begins with collecting data from various sources, followed by preprocessing steps such as resizing and annotation. Once the data is prepared, the training phase begins, where a label map is created and the YOLOv8 model is configured with appropriate parameters. The model is then trained, and key metrics like loss and accuracy are monitored. After training, the testing phase involves adjusting parameters and feeding test images into the model for prediction. The system analyses the images, detects fire zones, and generates bounding boxes around the identified areas. This structured approach ensures reliable detection, making it suitable for real-time fire monitoring applications.


Fig 2: Workflow Architecture Diagram for the Whole Detection Process

A. Image Acquisition
Image acquisition is a crucial step in developing an effective fire detection system. For this study, a total of 2,509 images were collected from various online sources, including crime and emergency response websites, which provide real-world fire incident images. The dataset includes diverse scenarios such as day and night fires, aerial views, fixed-shot fires, surface fires, trunk fires, and canopy fires. The collected images ensure a comprehensive representation of different fire conditions, enhancing the robustness of the detection model.

Table 1: Dataset Statistics

To maintain a balanced and structured dataset, the images were divided into 1,004 for training, 754 for validation, and 751 for testing. The YOLOv8 model was trained using this dataset, and the best-performing model, best.pt, was selected for further evaluation. This structured dataset and well-defined training process ensure high accuracy in detecting fire in real-time applications.

Fig 3: Samples of Raw Image Data


B. Data Pre-Processing

• Augmentation
Enhancing dataset diversity plays a key role in improving the performance and adaptability of machine learning models. When working with images from various sources, differences in size, resolution, and lighting conditions can impact the training process. To address these variations, different preprocessing techniques were explored and compared to assess their impact on model performance.

One approach involved crop and rotational adjustments, particularly for images where labelling required alignment. Applying rotations within a range of -20 to +20 degrees helped account for real-world variations in object orientation, ensuring that the model could generalize better to different perspectives.

Another aspect considered was brightness correction. Some images had lower visibility due to poor lighting conditions, so brightness adjustments ranging from -20% to +20% were tested. This step helped improve image clarity, making features more distinguishable for training.

Additionally, flipping techniques were evaluated, where images were randomly flipped to introduce positional variations. This comparison allowed for an understanding of how different preprocessing methods influenced detection accuracy, ensuring a well-prepared dataset for fire recognition.

• Image Resizing
Standardizing image dimensions is essential for maintaining consistency across the dataset and ensuring optimal model performance. In this process, all images were resized to 640x640 pixels using a stretch-to-fit approach. This resizing method ensures that every image conforms to the required input dimensions while preserving important visual details.

Additionally, auto-orientation was applied to correct any discrepancies in image alignment due to variations in camera angles or metadata inconsistencies. This step helps maintain uniformity across the dataset, preventing potential distortions that could affect the model's ability to accurately detect fire in different scenarios.
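The paper applies these transformations inside Roboflow; as a point of reference, an offline pipeline with the same parameters could be sketched with the Albumentations library (a substitution for illustration, not the authors' tooling, and the file names are placeholders):

```python
# Offline equivalent of the augmentation and resizing steps described above,
# sketched with Albumentations; the paper performed these in Roboflow.
import albumentations as A
import cv2

augment = A.Compose([
    A.Rotate(limit=20, p=0.5),                   # rotations in [-20, +20] degrees
    A.RandomBrightnessContrast(brightness_limit=0.2,
                               contrast_limit=0.0, p=0.5),  # brightness +/-20%
    A.HorizontalFlip(p=0.5),                     # random flips
    A.Resize(height=640, width=640),             # stretch-to-fit 640x640 resize
])

image = cv2.imread("fire_sample.jpg")            # placeholder input image
augmented = augment(image=image)["image"]
cv2.imwrite("fire_sample_aug.jpg", augmented)
```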

Fig 4: Sample of Resized Image Data

• Image Labelling or Annotating
Roboflow labelling software was used to select the appropriately scaled images. The fire regions in the dataset were annotated using the bounding box tool and polygon tool, ensuring precise labelling. The annotations were automatically saved, and the export option generated a TXT file containing detailed information about the marked fire regions.

Once the labelling process was completed, the dataset underwent a structured data preprocessing workflow. The images were divided into three sets: 40% for training, 30% for testing, and 30% for validation, ensuring a balanced distribution for model development. After pre-processing, the dataset was prepared for further data processing steps to enhance model performance and accuracy.
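Roboflow's TXT export follows the standard YOLO label convention: one line per annotated box, holding the class index followed by the normalized center coordinates and box size. A small sketch of reading such a file (the file name and sample values are illustrative):

```python
# Sketch of parsing one YOLO-format TXT label file as exported by Roboflow.
# Each line is "class x_center y_center width height", with all coordinates
# normalized to [0, 1]. The file name below is a placeholder.
def read_yolo_labels(path: str) -> list[dict]:
    boxes = []
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            boxes.append({
                "class": int(cls),        # 0 = fire in a single-class dataset
                "x_center": float(xc),
                "y_center": float(yc),
                "width": float(w),
                "height": float(h),
            })
    return boxes

print(read_yolo_labels("fire_0001.txt"))
# e.g. [{'class': 0, 'x_center': 0.48, 'y_center': 0.55,
#        'width': 0.31, 'height': 0.42}]
```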


Fig 5: Flowchart for (a) Labelling the Resized Image Data and (b) Data Pre-Processing Steps

Fig 6: Samples of Labelled Image Data

C. Data Processing
In this step, the completed annotations are exported as plain TXT files, a format that is easy to export and to import in a structured manner. The processed data is then handed off to the model training steps.

V. RESULTS AND DISCUSSION

Training the YOLOv8-based fire detection model involved key hyperparameter tuning and optimization. The model was trained for 300 epochs using Stochastic Gradient Descent (SGD) with a batch size of 16, a learning rate of 0.01, and a weight decay of 0.001.
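For reference, this configuration maps directly onto the Ultralytics training API. The sketch below mirrors the stated hyperparameters plus the 50-epoch early-stopping patience described next; the dataset YAML path is an assumption, and yolov8s.pt as the starting checkpoint is an inference from the roughly 11.1M parameters reported later in this section.

```python
# Hedged training sketch matching the reported configuration; the data
# YAML path and the model variant are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")          # pre-trained checkpoint (variant assumed)
model.train(
    data="fire_dataset/data.yaml",  # class names + train/val/test image paths
    epochs=300,                     # reported epoch budget
    batch=16,
    imgsz=640,                      # matches the 640x640 preprocessing
    optimizer="SGD",
    lr0=0.01,                       # initial learning rate
    weight_decay=0.001,
    patience=50,                    # early stopping after 50 stagnant epochs
)
```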

Fig 7: Graphical Representation of Model During Epochs


To prevent overfitting, an early stopping mechanism was applied, halting training if no improvement was seen for 50 consecutive epochs. The pre-trained YOLOv8 model helped in faster convergence.

Performance evaluation using mAP@50 showed a steady increase, indicating better fire detection accuracy. The box loss, class loss, and object loss graphs showed a significant decline, confirming effective learning.

A. Model Evaluation
The evaluation metrics employed in this paper to assess the model's performance included precision (P), recall (R), average precision (AP), mean average precision (mAP), F1 score, parameters, floating point operations (FLOPs), and frames per second (FPS). AP represents the area under the precision-recall (PR) curve, while mAP signifies the average AP across different categories. The formulas used for these metrics are outlined as (1)-(3).
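The equation images did not survive extraction; from the definitions in this section, (1)-(3) take the standard forms:

$$P = \frac{TP}{TP + FP} \tag{1}$$

$$R = \frac{TP}{TP + FN} \tag{2}$$

$$mAP = \frac{1}{n}\sum_{i=1}^{n} AP_i, \qquad AP_i = \int_0^1 P_i(R)\,dR \tag{3}$$

The F1 score reported later combines precision and recall as $F1 = 2PR/(P + R)$.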

True positive (TP) signifies accurate classification of a sample as positive, while false positive (FP) denotes incorrect classification of a sample as positive. False negative (FN) signifies the misclassification of a positive sample as negative, and 'n' represents the count of categories. FLOPs are a measure of computational complexity, indicating the number of computations performed by a model, and FPS (frames per second) represents the rate at which frames are processed.

The YOLOv8 model was evaluated for fire detection using a Google Colab GPU. The model consists of 129 layers, 11,135,987 parameters, and 11,135,971 gradients, with a GFLOP value of 28.6, ensuring efficient computation. Key performance metrics are as follows:

Table 2: Performance Metrics

B. Analysis of Results
The results indicate a steady improvement in precision, recall, and mean Average Precision (mAP) over the training epochs. The precision and recall curves show an upward trend, suggesting that the model's ability to correctly detect fire and smoke has improved. The loss plots for training and validation exhibit a consistent decline, which indicates effective learning and reduced error over time. The final mAP@50 score is high, reflecting strong object detection performance. However, the mAP@50-95 value is lower, implying that performance varies across different IoU thresholds.

Fig 8: YOLOv8-based Epoch Values


Fig 9: YOLOv8-based Training Graph with 10 Epochs

• Confusion Matrix
A confusion matrix is a crucial tool in machine learning used to assess the effectiveness of a classification model. It provides insights into the model's strengths and weaknesses by evaluating precision, recall, and overall accuracy. In this case, the matrix defines two key categories: fire and background. True Positives (TP) represent correctly identified instances, whereas False Positives (FP) indicate misclassifications.

The model accurately detects fire in 70% of cases, while 30% of instances are incorrectly classified as background. These values, ranging between 0.01 and 0.70, offer deeper insights into confidence levels for each classification. The findings highlight areas that require improvement, particularly in minimizing false negatives to enhance detection accuracy.

Examining the training performance, the optimal results are observed between the 82nd and 85th iterations. The evaluation of fire detection using the YOLOv8 model is compared against other object detection models, including YOLOv7, YOLOv5, MobileNet-v2, and ResNet-32. The model's performance is analysed across different training iterations, with metrics such as mean Average Precision (mAP@0.5) being monitored to gauge learning effectiveness. A higher mAP@0.5 value indicates superior learning and model efficiency.

Additionally, the F1-score, computed as given above, demonstrates that YOLOv8 consistently surpasses other models. It achieves an F1-score of 60% and a mAP@0.5 value of 57.3%. Further analysis of model complexity highlights YOLOv8's advantage over YOLOv7, which has the highest number of trainable parameters, potentially reducing its ability to generalize effectively.

Moreover, an evaluation of classification accuracy at the 85th iteration indicates an overall performance of 90.45%. The analysis reveals a strong detection capability for fire, achieving 70% accuracy. Despite some classification challenges, YOLOv8 proves to be a promising solution for real-time fire detection across various scenarios.

Fig 10: Correlation Matrix of Metrics


Table 3: Testing Execution of YOLOv7, YOLOv5, MobileNet-v2 and ResNet-32

C. Visualization
The model underwent training for 100 epochs, with each epoch representing a full pass through the training dataset. Throughout this process, the model's parameters were continuously updated based on computed losses and gradients. Training was concluded at 85 epochs, as optimal results were observed around the 80th step, in line with the predefined configuration settings. Early stopping was not activated, since the criteria for halting training were not met within the specified 85 epochs. The entire training process took approximately three hours, though this duration could vary depending on computational resources and hardware capabilities, highlighting the resource-intensive nature of deep learning training.

Performance peaked at 85 epochs before showing signs of decline, a typical case of overfitting, where the model becomes too specialized in the training data and loses its ability to generalize effectively to new inputs. A comparative analysis of various models revealed that YOLOv8 consistently outperformed the others in fire detection tasks. Specifically, YOLOv8 achieved detection accuracies of 90% and 51%, whereas models such as YOLOv7, ResNet-32, and MobileNet-v2 failed to detect fire in certain cases.

Further assessment demonstrated that YOLOv8 consistently provided superior detection accuracy compared to YOLOv7, YOLOv5, ResNet-32, and MobileNet-v2. For instance, when evaluating multiple images, the detection rates for YOLOv8 reached 79%, whereas YOLOv7, YOLOv5, ResNet-32, and MobileNet-v2 recorded lower accuracies of 60%, 47%, 27%, and 49%, respectively. Overall, YOLOv8 proved to be the most effective model for fire detection, outperforming the other architectures trained on the same dataset. While training across different epoch counts showed minor differences, the YOLOv8 model trained for 85 epochs delivered the best performance across all evaluation metrics. Beyond this point, extending the training to 100 or 150 epochs led to a decline in accuracy due to overfitting.

Fig 11: Sample Detected Images

D. Real-Time Fire Detection
The real-time fire detection system processes incoming images and detects fire with a confidence score using deep learning techniques. It identifies fire regions by drawing bounding boxes around them and assigns a confidence value. Upon detection, the system sends an alert message to a designated person, ensuring a quick response. Additionally, a log entry is created, recording the timestamp of the fire event along with the confidence score, providing a record for future analysis and verification.


Fig 12: Real Time Fire Detection with the Predictions

Fig 13: Sample of Saved Log after Completing the Process

Fig 14: Text Message of Fire Detection via SMS

Fig 15: Process Completion Acknowledgement Text

VI. CONCLUSION

Fire outbreaks present a major risk to lives and property, necessitating an efficient and reliable detection system. This project successfully developed an AI-powered fire detection system using the YOLOv8 object detection model, which significantly outperforms traditional methods in terms of speed and accuracy. The model was trained on a dataset of 2,509 images, split into training, validation, and testing sets, ensuring robust learning and generalization. Unlike conventional image analysis techniques, which may result in delayed detection, this system processes video input in real time, marking fire-affected areas with bounding boxes and confidence scores. Additionally, a CSV-based logging system records critical details such as timestamps, fire status, and confidence levels, while an automated SMS alert system using Twilio ensures immediate notification to designated authorities upon fire detection.

A comparative analysis was conducted with YOLOv7, YOLOv5, ResNet-32, and MobileNet-v2 to evaluate the effectiveness of the proposed model. While other models struggled with lower detection accuracy and inconsistencies, YOLOv8 demonstrated superior performance with a mean Average Precision (mAP) of 91.3%, a precision of 90.3%, and a recall of 86.9%. The model's training peaked at 85 epochs, where it achieved optimal detection capability without overfitting. These results highlight YOLOv8's efficiency in real-time fire detection across various scenarios, making it a practical solution for fire monitoring and prevention. Future improvements could focus on optimizing the model for edge devices and enhancing its robustness under varying environmental conditions to ensure wider applicability and deployment.


