Damage Detection Using YOLOv8 AI for Vehicle Assessment
Abstract: Vehicle damage detection is an essential task in automotive assessment, insurance claim processing, and fleet
management. Traditional methods involve manual inspection, which is time-consuming and prone to errors. This paper
presents an automated damage detection approach utilizing YOLOv8 (You Only Look Once version 8), a state-of-the-art
deep learning model for object detection. Our methodology involves training the model on a dataset comprising images of
vehicles with and without damage, using supervised learning techniques. The model achieves high detection accuracy and
efficiency, making it suitable for real-world applications. This study compares YOLOv8 with previous versions and
alternative models to highlight improvements in speed and precision. The findings suggest that this approach can
significantly enhance vehicle assessment processes, reducing human effort and improving consistency in damage evaluation.
Keywords: YOLOv8, Damage Detection, Vehicle Assessment, AI, Deep Learning, Computer Vision.
How to Cite: Vanathi. B.; Akshaya. R.; Alfina. P.; Gayathri. V.; Lekhasri. S. (2025). Damage Detection Using YOLOv8 AI for
Vehicle Assessment. International Journal of Innovative Science and Research Technology,
10(3), 982-987. https://doi.org/10.38124/ijisrt/25mar567.
Augmentation: Random flipping, rotation, and brightness adjustment to improve generalization.

Training Setup

Framework: PyTorch-based YOLOv8 implementation.
Hardware: Trained on NVIDIA GPU (Tesla T4/RTX 3090) for high-speed computation.
Optimizer: Adam with learning rate decay.
Batch Size: 16 per training iteration.
Loss Function: Intersection over Union (IoU) for bounding box optimization.

Training Process

Load annotated dataset in YOLOv8 format.
Train the model using transfer learning.
Monitor performance using real-time metrics.
Adjust hyperparameters if necessary.

Monitoring with Weights & Biases (W&B)

Tracking Model Progress:
Saves training runs, hyperparameters, and evaluation metrics.
Helps in visualizing loss curves, precision-recall graphs, and mAP.

Logging Sample Predictions:
Stores images with bounding boxes to assess the quality of detections.

D. Model Testing
Once training was complete, the model was evaluated using a separate test set to ensure generalization.

Performance Metrics

Accuracy Plot
Measures how well the model correctly classifies damaged and undamaged areas.
Tracks model performance over training epochs.

Loss Plot
Indicates how well the model minimizes error during training.
Ensures that the model is not overfitting.

Evaluation Metrics Used

Classification Accuracy
Measures correct predictions out of total samples.
Formula: Accuracy = (TP + TN) / (TP + TN + FP + FN)

TP (True Positive): Correctly detected damaged vehicle parts.
TN (True Negative): Correctly detected undamaged vehicle parts.
FP (False Positive): Wrongly classified undamaged parts as damaged.
FN (False Negative): Missed actual damages.

Precision, Recall, and F1-Score

Precision: Measures how many of the detected damages are actually correct. Precision = TP / (TP + FP)
Recall: Measures how many actual damages were detected by the model. Recall = TP / (TP + FN)
F1-score: A balance between precision and recall. F1 = 2 × (Precision × Recall) / (Precision + Recall)

Model Validation and Error Analysis

Overfitting Check: Ensured training and testing accuracy are similar.
Misclassification Cases: Analyzed errors to improve labeling and model robustness.
Dataset Bias Detection: Ensured diverse training images to prevent overfitting to specific vehicle types.

E. Model Deployment and Prediction

GUI Development for User Interaction
To make the model user-friendly, a Graphical User Interface (GUI) was created using Tkinter for easy interaction.

Features of the GUI:
Upload Image: Allows users to upload vehicle images.
Damage Detection Button: Triggers YOLOv8 inference.
Display Results: Shows detected damage areas with bounding boxes.

Real-Time Damage Detection Process
User uploads an image.
Model processes the image and runs inference.
Detected damages are highlighted with bounding boxes.
Results are displayed on the GUI.

IV. DISCUSSION: RESULTS ANALYSIS

Example Detections
Visual results demonstrate the model's effectiveness in detecting various types of vehicle damage, such as:
Dents
Scratches
Cracks
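The evaluation metrics above follow directly from the confusion-matrix counts. A minimal Python sketch is given below; the sample counts are purely illustrative, not results from this study:

```python
# Accuracy, precision, recall, and F1 computed from raw confusion-matrix
# counts, matching the formulas given in the text.

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Return the four evaluation metrics from TP/TN/FP/FN counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # guard empty denominators
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts only: 80 correct detections, 10 false alarms, 15 misses.
m = classification_metrics(tp=80, tn=95, fp=10, fn=15)
print(m)  # accuracy = 0.875, precision = 80/90 ≈ 0.889, recall = 80/95 ≈ 0.842
```

The denominator guards matter in practice: an image set with no detections at all would otherwise raise a division-by-zero error when computing precision.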
Fig 3: F1 Confidence
Fig 5: Labels
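The Intersection over Union (IoU) measure named as the bounding-box criterion in the training setup can be sketched in a few lines. This is an illustration, not the exact loss YOLOv8 optimizes internally (which uses an IoU-based variant such as CIoU), and the (x1, y1, x2, y2) corner format is an assumption:

```python
# IoU for two axis-aligned boxes given as (x1, y1, x2, y2) pixel coordinates,
# with x1 < x2 and y1 < y2: area of overlap divided by area of union.

def iou(box_a, box_b) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; clamped to zero when the boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```

An IoU of 1.0 means a predicted damage box coincides exactly with the ground-truth annotation, while 0.0 means no overlap; training drives predicted boxes toward higher IoU with the labels.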