
Volume 10, Issue 3, March 2025 | International Journal of Innovative Science and Research Technology
ISSN No: 2456-2165 | https://doi.org/10.38124/ijisrt/25mar567

Damage Detection Using YOLOv8 AI for Vehicle Assessment

Vanathi. B.1; Akshaya. R.2; Alfina. P.3; Gayathri. V.4; Lekhasri. S.5
1,2,3,4,5 Department of Computer Science and Engineering, SRM Valliammai Engineering College, Kattankulathur, Chengalpattu, India

Publication Date: 2025/03/26

Abstract: Vehicle damage detection is an essential task in automotive assessment, insurance claim processing, and fleet
management. Traditional methods involve manual inspection, which is time-consuming and prone to errors. This paper
presents an automated damage detection approach utilizing YOLOv8 (You Only Look Once version 8), a state-of-the-art
deep learning model for object detection. Our methodology involves training the model on a dataset comprising images of
vehicles with and without damage, using supervised learning techniques. The model achieves high detection accuracy and
efficiency, making it suitable for real-world applications. This study compares YOLOv8 with previous versions and
alternative models to highlight improvements in speed and precision. The findings suggest that this approach can
significantly enhance vehicle assessment processes, reducing human effort and improving consistency in damage evaluation.

Keywords: YOLOv8, Damage Detection, Vehicle Assessment, AI, Deep Learning, Computer Vision.

How to Cite: Vanathi. B.; Akshaya. R.; Alfina. P.; Gayathri. V.; Lekhasri. S. (2025). Damage Detection Using YOLOv8 AI for Vehicle Assessment. International Journal of Innovative Science and Research Technology, 10(3), 982-987. https://doi.org/10.38124/ijisrt/25mar567.

I. INTRODUCTION

With vehicle sharing initiatives on the rise, the relevance of insurance management increases. The growth of commercial car sharing, peer-to-peer sharing, and home delivery results in a higher number of drivers per vehicle.

• Background
Vehicle damage assessment is critical in the automotive industry, particularly in insurance claims, car rentals, and fleet management. Traditional methods rely on human inspectors who visually assess damage, which can lead to subjectivity, inconsistency, and delays. The introduction of artificial intelligence (AI) and deep learning into vehicle assessment has the potential to automate and streamline this process.

• Problem Statement
Manual vehicle inspection is labor-intensive and often inaccurate due to human error. The lack of a standardized, automated approach leads to inconsistent evaluations. This paper proposes an AI-driven solution using YOLOv8, an advanced object detection model, to automate vehicle damage detection.

• Objectives
  • Develop a YOLOv8-based system for detecting and classifying vehicle damage.
  • Improve accuracy and efficiency in damage assessment.
  • Deploy the model in practical applications such as insurance processing and car rentals.

II. RELATED WORK

Several deep learning models have been explored for vehicle damage detection. Early studies utilized CNN-based architectures, but they lacked real-time processing capabilities. With the emergence of YOLO models, real-time damage detection has become feasible. YOLOv5 and Faster R-CNN have been used in prior research, achieving significant improvements in accuracy. However, YOLOv8 introduces enhanced detection speed and precision, making it an ideal choice for this application.

A. Traditional Damage Detection Methods

• Manual Inspection: Experts visually inspect and report damage, leading to inconsistencies.
• Traditional Image Processing: Edge detection, thresholding, and contour analysis have been used, but these techniques are sensitive to lighting and background variations.

B. AI in Damage Detection

• Deep Learning Models: Convolutional Neural Networks (CNNs) have been widely used for object detection.
IJISRT25MAR567 www.ijisrt.com 982



• Previous YOLO Versions: Earlier YOLO versions (YOLOv3, YOLOv4, and YOLOv5) have been applied in damage detection but faced challenges in balancing accuracy and speed.

C. Advancements in YOLOv8

YOLOv8 introduces significant improvements over previous versions, including:

• Higher detection accuracy.
• Better real-time performance.
• Improved feature extraction for small object detection.

D. Evolution of Object Detection Models

• Faster R-CNN: High accuracy but computationally expensive.
• SSD (Single Shot Detector): Balances speed and accuracy but struggles with small object detection.
• YOLO Family: Real-time object detection with significant improvements over time.

E. Applications of YOLO in Vehicle Damage Detection

• Insurance Claims Processing: Automating vehicle damage estimation to expedite claim settlements.
• Fleet Management: Large-scale vehicle condition monitoring for rental and logistics companies.
• Car Rentals & Leasing: Ensuring damage assessment before and after rental periods.

F. Challenges in AI-Based Damage Detection

• Dataset Limitations: Need for large, well-annotated datasets across diverse vehicle types and lighting conditions.
• False Positives & Negatives: Incorrect classification due to reflections, dirt, or scratches.
• Computational Requirements: Deployment on edge devices requires optimization for real-time processing.

III. PROPOSED MODEL

A. Data Collection

• Dataset Size:
  • Total Images: 485
  • Train Split: 80% (388 images)
  • Test Split: 20% (97 images)

• Number of Classes (nc): 8
The dataset comprises eight distinct damage classes, each assigned a unique label:
  • Damaged Door
  • Damaged Window
  • Damaged Headlight
  • Damaged Mirror
  • Dent
  • Damaged Hood
  • Damaged Bumper
  • Damaged Windshield

• Purpose of Dataset:
This dataset is crucial for training a multi-class classification model for car damage detection. The numerical labels help in properly annotating images, configuring the model's output layer, and interpreting model predictions during testing.

B. Annotation and Labeling

Accurate labeling is essential for training a YOLOv8 model, as it relies on precise bounding boxes for object detection. The following annotation tools and techniques were used:

• Tools Used for Annotation
  • Bounding Box Annotation Tool: Used to mark the location of vehicle damage.
  • Polygon Annotation Tool: Helps in accurately labeling irregularly shaped damage.
  • Smart Polygon: AI-assisted labeling for faster annotation.
  • Label Assist: Auto-labeling for efficiency in large datasets.
  • Zoom Tool: Allows for detailed annotation of small damage.

• Annotation Process
  • Drag and Select: Used to edit and adjust existing bounding boxes, allowing annotations to be moved and resized as needed.
  • Bounding Box Drawing: Crosshairs assist in drawing precise bounding boxes, and each annotation is labeled with the appropriate class.
  • Final Verification: Annotated images are reviewed to ensure accuracy before training.

C. Model Training

To train YOLOv8 for car damage detection, we followed a systematic approach involving dataset preparation, model configuration, training, and monitoring.

• Data Preprocessing
  • Image Resizing: Standardized image dimensions for consistency.
  • Normalization: Pixel values scaled for faster convergence.

  • Augmentation: Random flipping, rotation, and brightness adjustment to improve generalization.

• Training Setup
  • Framework: PyTorch-based YOLOv8 implementation.
  • Hardware: Trained on an NVIDIA GPU (Tesla T4/RTX 3090) for high-speed computation.
  • Optimizer: Adam with learning rate decay.
  • Batch Size: 16 per training iteration.
  • Loss Function: Intersection over Union (IoU) for bounding box optimization.

• Training Process
  • Load the annotated dataset in YOLOv8 format.
  • Train the model using transfer learning.
  • Monitor performance using real-time metrics.
  • Adjust hyperparameters if necessary.

• Monitoring with Weights & Biases (W&B)
  • Tracking Model Progress: Saves training runs, hyperparameters, and evaluation metrics; helps in visualizing loss curves, precision-recall graphs, and mAP.
  • Logging Sample Predictions: Stores images with bounding boxes to assess the quality of detections.

D. Model Testing

Once training was complete, the model was evaluated using a separate test set to ensure generalization.

• Performance Metrics
  • Accuracy Plot: Measures how well the model correctly classifies damaged and undamaged areas, and tracks model performance over training epochs.
  • Loss Plot: Indicates how well the model minimizes error during training and ensures that the model is not overfitting.

• Evaluation Metrics Used
  • Classification Accuracy: Measures correct predictions out of total samples.
    Accuracy = (TP + TN) / (TP + TN + FP + FN)
where:
  • TP (True Positive): Correctly detected damaged vehicle parts.
  • TN (True Negative): Correctly detected undamaged vehicle parts.
  • FP (False Positive): Wrongly classified undamaged parts as damaged.
  • FN (False Negative): Missed actual damages.

• Precision, Recall, and F1-Score
  • Precision: Measures how many of the detected damages are actually correct.
    Precision = TP / (TP + FP)
  • Recall: Measures how many actual damages were detected by the model.
    Recall = TP / (TP + FN)
  • F1-score: A balance between precision and recall.
    F1 = 2 x (Precision x Recall) / (Precision + Recall)

• Model Validation and Error Analysis
  • Overfitting Check: Ensured training and testing accuracy are similar.
  • Misclassification Cases: Analyzed errors to improve labeling and model robustness.
  • Dataset Bias Detection: Ensured diverse training images to prevent overfitting to specific vehicle types.

E. Model Deployment and Prediction

• GUI Development for User Interaction
To make the model user-friendly, a Graphical User Interface (GUI) was created using Tkinter for easy interaction.

• Features of the GUI:
  • Upload Image: Allows users to upload vehicle images.
  • Damage Detection Button: Triggers YOLOv8 inference.
  • Display Results: Shows detected damage areas with bounding boxes.

• Real-Time Damage Detection Process
  • The user uploads an image.
  • The model processes the image and runs inference.
  • Detected damages are highlighted with bounding boxes.
  • Results are displayed on the GUI.

IV. DISCUSSION: RESULTS ANALYSIS

• Example Detections
Visual results demonstrate the model's effectiveness in detecting various types of vehicle damage, such as:

• Dents
• Scratches
• Cracks

• Broken parts

• Limitations

• False Positives: Minor reflections sometimes misclassified as damage.
• Generalization Issues: Performance may degrade in poor lighting conditions.
• Dataset Bias: Needs more diverse training data for better robustness.
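The evaluation metrics defined in Section III-D follow directly from the four counts TP, TN, FP, and FN. A minimal Python sketch is given below; the counts used are illustrative only, not results reported in this paper.

```python
# Illustrative implementation of the metrics from Section III-D.
# The TP/TN/FP/FN counts below are made up for demonstration purposes.

def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Precision = TP / (TP + FP): fraction of detections that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Recall = TP / (TP + FN): fraction of actual damage that was detected."""
    return tp / (tp + fn)

def f1_score(p, r):
    """F1 = 2 x P x R / (P + R): harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Hypothetical counts for a single damage class.
tp, tn, fp, fn = 80, 10, 5, 5
p, r = precision(tp, fp), recall(tp, fn)
print(accuracy(tp, tn, fp, fn))  # 0.9
print(p, r, f1_score(p, r))
```

Because F1 is the harmonic mean, it penalizes a model whose precision and recall are far apart, which is why it is reported alongside the two individual metrics.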

Fig 1: Analyzed Accuracy Result

Fig 2: F1 Score, Precision and Recall Graph

Fig 3: F1 Confidence
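The training setup listed in Section III-C can be summarized as a small configuration sketch. The `data.yaml` path and the `yolov8n.pt` checkpoint name in the comment are assumptions for illustration, not the authors' exact script.

```python
# Sketch of the training configuration described in Section III-C. The
# ultralytics call shown in the trailing comment is one plausible way to
# launch it; the checkpoint and dataset paths are assumed, not from the paper.

def make_train_config(data_yaml="data.yaml"):
    """Collect the hyperparameters listed in Section III-C into one dict."""
    return {
        "data": data_yaml,    # dataset description file (path assumed)
        "batch": 16,          # batch size per training iteration
        "optimizer": "Adam",  # Adam with learning-rate decay
        "pretrained": True,   # transfer learning from pretrained weights
    }

cfg = make_train_config()
print(cfg["batch"], cfg["optimizer"])  # 16 Adam

# With the ultralytics package installed, training could then be run as:
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")       # pretrained weights (variant assumed)
#   model.train(**make_train_config())
```

Keeping the hyperparameters in one place like this makes it easy to log them to a tracker such as Weights & Biases, as described in the monitoring step.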


Fig 4: Confusion Matrix without Normalization
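A confusion matrix such as the one in Fig 4 is tallied by counting, for each sample, its true class against its predicted class. The sketch below shows the tallying logic with two hypothetical classes; the label lists are illustrative, not the paper's data.

```python
# How an unnormalized confusion matrix (cf. Fig 4) is built from labels.
# The example labels are hypothetical.

def confusion_matrix(true_labels, pred_labels, num_classes):
    """Rows = true class, columns = predicted class, entries = counts."""
    matrix = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(true_labels, pred_labels):
        matrix[t][p] += 1
    return matrix

# Two hypothetical classes: 0 = undamaged, 1 = damaged.
true_y = [1, 1, 0, 0, 1, 0]
pred_y = [1, 0, 0, 1, 1, 0]
m = confusion_matrix(true_y, pred_y, 2)
print(m)  # [[2, 1], [1, 2]]
```

The off-diagonal entries are exactly the FP and FN counts used by the metrics in Section III-D, which is why the confusion matrix and the precision/recall figures are reported together.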

Fig 5: Labels
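The eight damage classes from Section III-A map to numeric labels in a YOLOv8 dataset configuration file. The sketch below expresses hypothetical `data.yaml` contents as a Python dict; the train/val paths and the class ordering are assumptions for illustration.

```python
# Hypothetical YOLOv8 dataset configuration for the eight classes listed in
# Section III-A. Paths and class order are placeholders, not the authors'.
data_config = {
    "train": "images/train",  # 388 images (80% of 485)
    "val": "images/val",      # 97 images (20% of 485)
    "nc": 8,                  # number of classes
    "names": [
        "Damaged Door", "Damaged Window", "Damaged Headlight",
        "Damaged Mirror", "Dent", "Damaged Hood",
        "Damaged Bumper", "Damaged Windshield",
    ],
}

# Sanity checks mirroring the dataset statistics reported in Section III-A.
assert data_config["nc"] == len(data_config["names"])
assert round(485 * 0.8) == 388 and 485 - 388 == 97
print(data_config["nc"])  # 8
```

The index of each entry in `names` is the numeric label written into the annotation files, which is what the paper means by each class being "assigned a unique label".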


Fig 6: Labels Correlogram
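The four-step detection flow described in Section III-E can be sketched with a stub standing in for the trained YOLOv8 model. The box format, the `StubDetector` class, and the confidence threshold are all assumptions for illustration; in the actual system the Tkinter button handler would call a function like `detect_damage` with the real model.

```python
# Sketch of the Section III-E detection flow. StubDetector stands in for the
# trained YOLOv8 model; a real deployment would run model inference here.

class StubDetector:
    def predict(self, image):
        # Pretend we found one dent: (class_name, x1, y1, x2, y2, confidence).
        return [("Dent", 40, 60, 120, 140, 0.91)]

def detect_damage(image, model, min_conf=0.5):
    """Run inference and keep only confident detections for display."""
    boxes = model.predict(image)                 # step 2: inference
    return [b for b in boxes if b[5] >= min_conf]

# Step 1 (upload) is simulated by a placeholder image; steps 3-4 (drawing the
# boxes and showing them in the GUI) would consume the returned list.
results = detect_damage(image=None, model=StubDetector())
print(results)  # [('Dent', 40, 60, 120, 140, 0.91)]
```

Separating the inference logic from the GUI this way keeps the Tkinter layer thin: the button handler only uploads the image, calls the function, and draws the returned boxes.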

V. CONCLUSION

This paper demonstrates that YOLOv8 provides an efficient and accurate solution for vehicle damage detection. The model outperforms traditional inspection methods, offering real-time, objective assessments for the insurance and automotive industries.

FUTURE WORK

• Expanding the dataset with diverse damage types and lighting conditions.
• Enhancing the model to classify damage severity (minor, moderate, severe).
• Deploying the model on mobile applications for on-the-go vehicle assessment.

REFERENCES

[1]. Q. Zhang, X. Chang and S. B. Bian, "Vehicle-Damage-Detection Segmentation Algorithm Based on Improved Mask RCNN", IEEE Access, vol. 8, pp. 6997-7004, 2020.
[2]. Phyu Kyu and Kuntpong Woraratpanya, "Car Damage Detection and Classification", pp. 1-6, 2020.
[3]. Mahavir Dwivedi, Malik Hashmat, Omkar Shadab, S. N. Omkar, Edgar Bosco, Bharat Monis, et al., "Deep Learning Based Car Damage Classification and Detection", 2019.
[4]. K. Patil, M. Kulkarni, A. Sriraman and S. Karande, "Deep Learning Based Car Damage Classification", 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 50-54, 2017.
[5]. P. Rostami, A. Taimori, S. Sattarzadeh, H. O. Shahreza and F. Marvasti, "An Image Dataset of Vehicles Front Views and Parts for Vehicle Detection, Localization and Alignment Applications", 2020 10th International Symposium on Telecommunications (IST), pp. 25-30, 2020.
[6]. César Suescún, Paula Useche Murillo and Robinson Moreno, "Scratch Detection in Cars Using a Convolutional Neural Network by Means of Transfer Learning", International Journal of Applied Engineering Research, vol. 13, pp. 12976-12982, 2018.
[7]. H. Bandi, S. Joshi, S. Bhagat and A. Deshpande, "Assessing Car Damage with Convolutional Neural Networks", 2021 International Conference on Communication, Information and Computing Technology (ICCICT), pp. 1-5, 2021.
[8]. Aniket Gupta, Jitesh Chogale, Shashank Shrivastav and Rupali Nikhare, "Automatic Car Insurance using Image Analysis", International Research Journal of Engineering and Technology (IRJET), vol. 07, no. 04, Apr 2020, ISSN 2395-0056.
[9]. R. Girshick, J. Donahue, T. Darrell and J. Malik, "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation", 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, 2014.
