Damage Identification of Selected Car Parts Using Image Classification and Deep Learning
Abraham C. Chua, Christian Rei B. Mercado, John Phillip R. Pin, Angelo Kyle T. Tan,
2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM) | 978-1-6654-0167-8/21/$31.00 ©2021 IEEE | DOI: 10.1109/HNICEM54116.2021.9731806
Abstract — This study presents the use of image classification and deep learning in the field of insurance claims and management for the identification and assessment of damaged vehicle parts. Vehicular insurance claims require appraisers to assess the damage to the vehicles. A two-level machine learning-based system was developed to classify different car parts (front bumper, rear bumper, and car wheels) and to detect the presence of any damage. The image dataset used in the study was obtained from Google images. This dataset was used for training and validation of the convolutional neural network (CNN) models. The first model yields a training accuracy of 94.84% and a validation accuracy of 81.25% for car parts classification. The second model yields a training accuracy of 97.16% and a validation accuracy of 49.28% for damage identification.

Keywords — convolutional neural networks, damage evaluation, image classification, insurance claims, machine learning

I. INTRODUCTION

Automobile insurance processes typically require the performance and verdict of an appraiser. These appraisers must be objective in how they determine the amount of damage visible on the exterior of the vehicle in order to assess the cost of repairing the damaged part. According to data presented by the Beijing Bureau of the China Insurance Regulatory Commission, about 10 percent of all vehicle insurance claims are fraudulent in one way or another, which is a massive threat not just to the insurance company but to the entire industry [1]. To reduce the amount of human error present in vehicle insurance-related work, this study developed a machine learning model which can detect and classify car parts and the presence of any damage. This process can fast-track the damage appraisal and eliminate the possibility of bias during the assessment. The machine learning model uses image classification, which allows it to study the relationship of pixels in an image so that it can find distinct similarities and differences that can be used for predicting a certain outcome. In this study, the machine learning model used was a convolutional neural network. Three common car parts were considered: front bumper, rear bumper, and car wheels. The image dataset was taken from Google images.

II. REVIEW OF RELATED LITERATURE

A. Machine Learning in Automotive Insurance Process and Management

There have been demands to find ways to speed up the auto insurance process. One proposed solution is to prompt customers to upload pictures of their vehicle's damage using their mobile phones. The pictures can then be viewed by insurance agents to investigate the damage and process the insurance claims as quickly as possible. This system helped speed up the car insurance claim process, but some people discovered a flaw in the system and took advantage of the situation by submitting multiple copies of a supposedly single insurance case, resulting in huge losses to insurance companies [2]. The application and integration of both machine learning and machine vision in the automotive insurance industry can greatly decrease the insurance leakage caused by human error and fraud. In one study, Moiré effect detection was used to classify images and determine whether a picture had been downloaded from the internet, helping safeguard the insurance company against individuals who may try to cheat the system [3].

Mask Region-Based Convolutional Neural Networks (Mask R-CNN), PANet, and a combination of the two with the VGG16 network were used to detect and classify different car parts [4]. In other studies, feature representation techniques were used for pattern recognition; these processing techniques use raw data to discover the patterns needed for detection [5]. Other techniques that used transfer learning and ensemble learning exhibited significant results on the problem of vehicle damage classification [6]. The utilization of different types of deep learning concepts was also helpful in the advancement of
Authorized licensed use limited to: Pusan National University Library. Downloaded on September 12,2023 at 13:40:50 UTC from IEEE Xplore. Restrictions apply.
image-based detection, classification, and recognition problems. One example is the Convolutional Neural Network (CNN), which can be used for domain-specific pre-training and fine-tuning [7][8][9]. The use of auto-encoders and other unsupervised pre-training techniques on small datasets has been shown to increase the performance of the classifier, while most supervised pre-training techniques require bigger datasets. A combination of transfer and ensemble training would yield a more accurate deep learning model.

The data used in the dataset was collected from Google images. The image dataset was also collected through a local automotive dealer. The car parts classification includes three parts, namely front bumper, car wheel, and rear bumper.
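The per-class train/validation/test counts reported in the dataset tables can be produced with a simple partition helper. The sketch below is illustrative only: the split ratios and filenames are assumptions, not the paper's actual procedure.

```python
import random

def split_dataset(filenames, train_frac=0.6, val_frac=0.3, seed=42):
    """Shuffle and partition a list of image filenames into
    train/validation/test subsets (test receives the remainder)."""
    rng = random.Random(seed)
    files = list(filenames)
    rng.shuffle(files)
    n_train = int(len(files) * train_frac)
    n_val = int(len(files) * val_frac)
    train = files[:n_train]
    val = files[n_train:n_train + n_val]
    test = files[n_train + n_val:]
    return train, val, test

# Example: 279 images of one class (count is illustrative)
images = [f"undamaged_front_bumper_{i}.jpg" for i in range(279)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 167 83 29
```

Fixing the shuffle seed keeps the split reproducible across runs, so the same images always land in the same subset.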
Table II. Description of Data Set for Damage Classification

    Classes                  Train Size   Validation   Test
    Undamaged Front Bumper   179          95           5
    Undamaged Rear Bumper    206          43           5
    Undamaged Car Wheel      224          118          5
    Damaged Car Wheel        161          53           10

C. Training a CNN

Training the CNN started with random initialization. For both datasets, the CNN model consisted of the following layers: Convolution2D - Pooling2D - Convolution2D - Pooling2D - Convolution2D - Pooling2D - Convolution2D - Pooling2D - ReLU - Softmax. All convolutional layers use 3 x 3 kernels, with 32 channels for the first two layers and 64 channels for the remaining layers. Each pooling layer has a pool size of 2 x 2. The parts classification model was trained for 75 epochs with a batch size of 16, an input size of img_width, img_height = 175, 175, and softmax and ReLU (Rectified Linear Unit) activation functions. The damage classification model was trained for 150 epochs with a batch size of 32 and an input size of img_width, img_height = 75, 75. The images were rescaled by 1/255 and augmented with a shear range of 0.1, a zoom range of 0.1, and horizontal flipping.

D. Experimental Setup

The experimental setup consists of two CNN models. The first model is tasked with identifying which part of the car is shown; it used dataset 1, which includes 3 classes, and serves as the first-level detection of the system (CNN Model 1). The second model is tasked with determining whether the specific car part is damaged or undamaged; it used dataset 2, which includes 6 classes, and serves as the second-level detection of the system (CNN Model 2).

E. Performance Evaluation

    Precision = TP / (TP + FP)                                   (2)

    Recall = TP / (TP + FN)                                      (3)

    F1-score = 2 · (Precision · Recall) / (Precision + Recall)   (4)

where TP, FP, and FN are the true positive, false positive, and false negative counts, respectively.

III. RESULTS AND DISCUSSION

The CNN model training, validation, and testing results are presented here, including the performance metrics associated with the generated CNN models. Table III shows the values of the different metrics for car parts classification. The model yields a training accuracy of 94.84% using 585 images and a validation accuracy of 81.25% using 197 images. From these results, the performance of the model is sufficient to be used for prediction. Prediction accuracy is 60% for the rear bumper, 90% for the car wheel, and 80% for the front bumper. The model yields 100% for precision, recall, and F1-score metrics in all classes. The 60% accuracy for the rear bumper is mainly due to images of the rear bumper and front bumper looking similar.
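The per-class metrics in equations (2)–(4) can be computed directly from confusion counts. A minimal sketch, where the TP/FP/FN counts are illustrative and not the paper's data:

```python
def precision(tp, fp):
    # Equation (2): Precision = TP / (TP + FP)
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    # Equation (3): Recall = TP / (TP + FN)
    return tp / (tp + fn) if tp + fn else 0.0

def f1_score(p, r):
    # Equation (4): F1 = 2 * (P * R) / (P + R)
    return 2 * p * r / (p + r) if p + r else 0.0

# Illustrative counts for a single class
tp, fp, fn = 9, 1, 1
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 2), round(r, 2), round(f1_score(p, r), 2))  # 0.9 0.9 0.9
```

The zero-denominator guards return 0.0 when a class has no predicted or actual positives, which keeps per-class evaluation well-defined for small test sets like the ones in Table II.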
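The two-level detection flow described in the Experimental Setup can be sketched as a simple decision pipeline. The stub predictors below stand in for the trained CNN models and are assumptions for illustration only:

```python
PARTS = ["front_bumper", "rear_bumper", "car_wheel"]

def assess(image, predict_part, predict_damage):
    """Two-level assessment: Model 1 identifies the car part,
    then Model 2 decides damaged vs. undamaged for that part."""
    part = predict_part(image)             # CNN Model 1: 3 classes
    damaged = predict_damage(image, part)  # CNN Model 2: damaged or not
    return {"part": part, "damaged": damaged}

# Stub predictors standing in for the trained CNNs
result = assess("car.jpg",
                predict_part=lambda img: PARTS[0],
                predict_damage=lambda img, part: True)
print(result)  # {'part': 'front_bumper', 'damaged': True}
```

Keeping the two models behind plain callables makes the pipeline easy to test and lets either level be retrained or swapped without touching the other.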
V. CONCLUSION
REFERENCES
[2] P. Li, B. Shen, and W. Dong, "An Anti-fraud System for Car Insurance Claim Based on Visual Evidence," Computer Science and Pattern Recognition, pp. 1-6, Apr. 2018. arXiv:1804.11207.

[…] Information Technology, Communication and Control, Environment and Management (HNICEM), 2018, pp. 1-6. doi: 10.1109/HNICEM.2018.8666357.

[…] 2011 International Symposium on Image and Data Fusion, Tengchong, Yunnan, 2011, pp. 1-4. doi: 10.1109/ISIDF.2011.6024246.

[6] P. Sarath, M. Soorya, S. A. Rahman, K. S. Suresh, and K. Devaki, "Assessing Car Damage using Mask R-CNN," International Journal of Engineering and Advanced Technology (IJEAT), vol. 9, pp. 1-4, Feb. 2020. doi: 10.35940/ijeat.C5302.029320.

[7] K. Patil, M. Kulkarni, A. Sriraman, and S. Karande, "Deep Learning Based Car Damage Classification," 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, 2017, pp. 50-54. doi: 10.1109/ICMLA.2017.0-179.

[9] R. K. C. Billones, A. A. Bandala, L. A. Gan Lim, E. Sybingco, A. M. Fillone, and E. P. Dadios, "Visual Percepts Quality Recognition Using Convolutional Neural Networks," in K. Arai and S. Kapoor (eds.), Advances in Computer Vision (CVC 2019), Advances in Intelligent Systems and Computing, vol. 944, Springer, Cham, 2020. doi: 10.1007/978-3-030-17798-0_52.

[17] A. K. Brillantes et al., "Philippine License Plate Detection and Classification using Faster R-CNN and Feature Pyramid Network," 2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), 2019, pp. 1-5. doi: 10.1109/HNICEM48295.2019.9072754.