
Conference Paper · March 2017
DOI: 10.1109/ICSCN.2017.8085671


2017 4th International Conference on Signal Processing, Communications and Networking (ICSCN -2017), March 16 – 18, 2017, Chennai, INDIA

Vision based feature diagnosis for automobile instrument cluster using machine learning

M. Deepan Raj and V. Sathiesh Kumar
Department of Electronics Engineering, Madras Institute of Technology, Anna University
Chennai-600044, India
[email protected], [email protected]

Abstract— This paper presents an advanced and effective testing approach that combines hardware-in-the-loop (HIL) simulation with a vision based machine learning technique to achieve end-to-end automation in the feature diagnosis and validation of automotive instrument clusters. Numerous HIL systems are currently in practice for simulating vehicle networks in real time by providing the signals each test case requires. Many existing approaches tap the signal from the instrument cluster before it is displayed and compare the captured signal with the expected value for the given test case. Such approaches operate only at the software level and fail to identify faults in the end display unit of the cluster. The proposed method uses a vision based machine learning system to monitor the cluster visually, thereby identifying faults in the cluster at the end-product level. This approach greatly eases the task of testing a large number of units by making onerous repeated tests possible without any human intervention, whereas the current testing method needs human approval for each and every test case, which is a tedious task.

Keywords: Hardware-in-the-loop, Machine Vision, Machine Learning, Automated testing for Instrument cluster.

I. INTRODUCTION

In recent years, there has been an immense growth in the electronically controlled components of cars. The automotive industry faces a formidable challenge in the seamless integration of devices and software architectures within ever-shorter development timescales. This work aims to increase confidence in the design and use of complex automotive electrical systems.

The Electronic Control Unit (ECU) for the instrument cluster (IC) is one of the most complex embedded control systems in an automobile. The IC is responsible for displaying all the information the driver needs and responds to user commands immediately. All the functional ECUs inside the car are connected to the IC, which gathers information from the various ECUs and relays it to the user. The Controller Area Network (CAN) bus protocol is used for sending and receiving information between the ECUs and the IC. A modern car's instrument cluster performs many operations, as it displays information such as driving condition, fault diagnostics, warning signals, navigation, reminders, infotainment, etc. The information displayed by the instrument cluster must be reliable, so the testing methodologies have to be rigorous and thorough in exercising its functionality. Efficient, error-free testing is difficult, as it needs intensive manual work. Moreover, most instrument cluster functionality cannot be verified merely by monitoring and comparing the I/O line inputs and outputs or the CAN communication lines. For example, to confirm that a telltale is switched according to the tester's test case, or that the odometer value is shown on the Thin Film Transistor (TFT) display of the IC, manual visual observation of the output is the only way to check the functionality. This kind of testing becomes ever more tedious as more message units on the IC ECU come under test; again, monitoring the hardware lines or the CAN bus would not tell whether the functionality is as expected.

To overcome this problem, automotive manufacturers have adopted automated testing techniques [1-3]. Hardware-in-the-loop (HIL) technology was introduced for the automated testing and validation of instrument clusters, powertrains, infotainment systems, etc. [4]. The technique has been very successful because of its ability to perform dynamic testing, and it greatly reduces the time required for testing with less user intervention. Machine vision systems are being deployed in almost every field, including automotive manufacturing [5-7], robotics [8], and many others [9-11].

The proposed system combines the HIL technique with a vision based machine learning approach, making it a novel approach in the field of visual testing. Since HIL systems vary from company to company, this paper focuses on how the vision based machine learning approach works against a generic HIL system. Integrating vision with the HIL technique yields fully automated testing in which no user intervention is needed. For detecting gauges, warning lights/telltales, and information shown on the TFT display, machine learning algorithms are used to detect the regions of interest, extract their information, and compare the actual output with the expected output. This approach reduces the time required for testing the instrument cluster, as no user input is needed during the testing process, and it makes onerous repeated tests possible.

978-1-5090-4740-6/17/$31.00 ©2017 IEEE



II. SYSTEM OVERVIEW

The complete system comprises the hardware-in-the-loop tester and the vision system. In the current technique, a dedicated computer is connected to the HIL system; a functionality test plan, combined with a Visual Basic (VB) backed Excel sheet, is the software that issues commands to the HIL from the computer. Based on the conditions of each test case, the HIL simulates the environment the test case requires for the IC. Based on the output displayed on the IC, the HIL retrieves data from the IC's RAM buffer to the computer for validation. The higher-level abstraction is shown in Fig. 1.

Fig. 1 Higher Level Description

Including a camera in the proposed method eases the testing process by monitoring the IC directly. For each test case the HIL provides data to the IC and simultaneously triggers the camera to take a snapshot of the IC under test. The captured images are sent to the host computer with timestamps, and an acknowledgement is returned to the HIL stating that the image has been captured and the system is ready for the next test case. The host computer compares the actual image with the expected image for the given test case using machine learning algorithms and generates the test results. The same operation is repeated for all the test cases given by the user, and the process can be repeated automatically for every IC under test.

A. Machine Vision System:
This system consists of a camera fixed above the instrument cluster for acquiring images of the test case results. The technical specifications of the camera are shown in Table 1.

TABLE 1 CAMERA SPECIFICATIONS

Parameter             Value
Resolution            1280 x 720 pixels
Technology            Logitech Fluid Crystal Technology
Autofocus             Yes
Connectivity          High-speed USB 2.0
Photos                Up to 8 megapixels
Autolight correction  Yes

OpenCV (Open Source Computer Vision) [12] is a library of programming functions for computer vision and machine learning techniques. It is widely used for real-time vision applications, as it is built to provide a common infrastructure that accelerates the use of machine vision in real-time applications. Using OpenCV on the host computer, the algorithms required for learning the image and comparing it with the expected output of each test case are employed for the overall automation.

B. HIL Tester:
Many automotive cockpit electronics manufacturers use tailor-made testers for their products. In general, a HIL tester consists of the various interface links needed for communicating with other devices, such as CAN, A/D input and output, RS232, etc. It provides various simulated CAN signals, supplies inputs to the cluster, and reads outputs from it. In our case it also has to send a trigger signal to the camera when each test case is executed and keep the camera in sync with the test case execution.

C. Host Computer:
The computer is connected to the HIL and the camera for providing inputs and retrieving outputs. Functionality test plans are created in Excel, linked with Visual Basic, thereby connecting to the HIL. Based on the test cases, it commands the HIL to provide inputs to or retrieve outputs from the IC. Additionally, in the proposed system it acquires images from the camera and compares them with the database to generate results for the test cases.

D. Instrument Cluster:
As an experimental example, a Renault high-variant car instrument cluster is shown in Fig. 2.

Fig. 2 Instrument Cluster

It has the following major components:
1. A speedometer analog gauge indicating the vehicle speed in km/h, and a fuel indicator driven by the signal received from the ECUs.
2. A tachometer analog gauge indicating the engine speed in revolutions per minute (RPM), and an engine temperature indicator.
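The per-test-case flow described in this section (HIL drives the cluster, camera snapshot, image comparison, pass/fail logged against the test plan) can be sketched as below. This is only an illustrative skeleton, not the authors' implementation: `apply_test_case`, `capture_snapshot`, and `images_match` are hypothetical stubs standing in for the vendor-specific HIL/camera interfaces, and results go to a CSV file rather than the VB-linked Excel sheet.

```python
import csv

import numpy as np

def apply_test_case(test_case):
    """Hypothetical stub: command the HIL to drive the cluster inputs.
    A real tester would send the simulated CAN signals here."""
    pass

def capture_snapshot(test_case):
    """Hypothetical stub: trigger the camera and return the frame.
    A synthetic frame is fabricated here so the sketch is runnable."""
    return np.full((4, 4), test_case["value"], dtype=np.uint8)

def images_match(actual, expected, tol=1e-6):
    """Stand-in for the color channel / MSE / SSIM comparison."""
    diff = actual.astype(np.float64) - expected.astype(np.float64)
    return float(np.mean(diff ** 2)) <= tol

def run_test_plan(test_cases, expected_images, result_path):
    """Run every case without user intervention and log pass/fail."""
    with open(result_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["test_case", "result"])
        for case in test_cases:
            apply_test_case(case)                    # HIL drives the IC
            actual = capture_snapshot(case)          # synced camera snapshot
            expected = expected_images[case["name"]]
            verdict = "PASS" if images_match(actual, expected) else "FAIL"
            writer.writerow([case["name"], verdict])  # result-sheet row
```

The loop mirrors the acknowledgement-driven synchronization described above: each capture happens only after the HIL has applied the corresponding test case.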


3. Eighteen warning telltales indicating various warning signals such as Anti-lock Braking System (ABS) fault, brake fluid low, seat belt status, airbag failure, etc.
4. Indicator telltales indicating fog light status, left and right turn indicator status, etc.
5. A TFT display between the two gauges, acting as a message center that shows important messages such as navigation, anniversary reminders, fuel computer, time and date, etc.

III. VISION BASED MACHINE LEARNING APPROACH

A. Feature Vector and Image Descriptor
In order to apply learning algorithms to an image, the image has to be quantified and abstractly represented in the form of numbers. The process of quantifying an image is called feature extraction. Feature extraction has its own rules, algorithms, and methodologies that abstractly quantify the contents of an image using only a list of numbers, representing it as a feature vector for the given image. A feature vector can be formed using an image descriptor or a feature descriptor. Image descriptors are extracted globally from an image and yield a single feature vector. Feature descriptors, on the other hand, describe interesting spots in the image that can be used for extraction, and they can yield multiple feature vectors for a single image. For the proposed system, an image descriptor proves to be the right technique, as shown in Fig. 3.

Fig.3. Extracting Feature vector using Image descriptor

B. Color Channel model:
The color channel method is one of the easiest and fastest image descriptors for extracting a feature vector from an image. Even between very similar images, the distribution of color will not be the same. Using this as the underlying idea, a feature vector is extracted from the captured image of the IC and compared with the feature vector of the expected output image. In this technique, the feature vector is formed from the mean and variance calculated for each color channel, namely Red, Green, and Blue, as shown in Fig. 4.

Fig.4. Color Channel descriptor

To find the similarity between the actual image and the expected image, the Euclidean distance is computed between their feature vectors. It takes the sum of squared differences between each entry of the vectors p and q, where p is the actual image feature vector and q is the expected image feature vector, as per Eq. 1:

d(p, q) = sqrt( \sum_i (p_i - q_i)^2 )    (1)

Thus, if the Euclidean distance is equal to zero, the images are a perfect match. But image similarity cannot be measured with color statistics alone. For example, suppose the date field in the set-birthday menu of the instrument cluster is different but contains the same digits in different positions: if the expected value is 32 and the actual value is 23, this method will not identify the difference, because the Euclidean distance is the same for both images, as shown in Fig. 5.

Fig.5. Color Channel method

C. MSE and SSIM:
To compare the images with greater accuracy when the color channel method fails, the MSE (Mean Squared Error) and SSIM (Structural Similarity Index Measure) can be used for matching the result against the database. Given a noise-free m x n monochrome image I and its noisy approximation K (in this case, the query image), the MSE is calculated as per Eq. 2:

MSE = (1 / (m n)) \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [ I(i, j) - K(i, j) ]^2    (2)

Given two images x and y of the same size, the SSIM is calculated as per Eq. 3:

SSIM(x, y) = ( (2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2) ) / ( (\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2) )    (3)

where \mu_x and \mu_y are the averages of x and y, \sigma_x^2 and \sigma_y^2 are the variances of x and y, \sigma_{xy} is the covariance of x and y, and c_1 and c_2 are two variables that stabilize the division when the denominator is weak.
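As a rough illustration (not the authors' code), the per-channel mean/variance descriptor and the Eq. 1 comparison can be sketched in NumPy; the 6-dimensional feature layout and the toy images are assumptions:

```python
import numpy as np

def color_channel_features(image):
    """Per-channel mean and variance of an H x W x 3 image -> 6-D vector."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    return np.concatenate([pixels.mean(axis=0), pixels.var(axis=0)])

def euclidean_distance(p, q):
    """Eq. 1: square root of the sum of squared differences."""
    return float(np.sqrt(np.sum((np.asarray(p) - np.asarray(q)) ** 2)))

# The descriptor's limitation: rearranging the same pixels (the "32 vs 23"
# case) leaves the channel statistics, and hence the distance, unchanged.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (10, 20, 30)
rearranged = img[::-1, ::-1].copy()   # same pixels, different positions
d = euclidean_distance(color_channel_features(img),
                       color_channel_features(rearranged))   # -> 0.0
```

Because the distance is zero for any permutation of the same pixels, a structural check such as MSE/SSIM is still required.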


Preprocessed images (resized and converted to grayscale) are used in estimating the MSE and SSIM metrics, as shown in Fig. 6.

Fig.6. Pre-processing

This method provides a quantitative score for the degree of similarity/fidelity and the level of error/distortion between the images. The MSE returns the value 0 and the SSIM returns the value 1 if the two images are structurally identical.
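A minimal NumPy sketch of Eq. 2 and Eq. 3 is given below. It assumes the images have already been preprocessed (resized and grayscaled, e.g. with OpenCV's `cv2.resize` and `cv2.cvtColor`) and evaluates SSIM once over the whole image; reference SSIM implementations use local sliding windows, so this single-window form is a simplification, with the usual (0.01 L)^2 and (0.03 L)^2 constants assumed:

```python
import numpy as np

def mse(I, K):
    """Eq. 2: mean squared error between equal-size grayscale images."""
    diff = I.astype(np.float64) - K.astype(np.float64)
    return float(np.mean(diff ** 2))

def ssim_global(x, y, L=255.0):
    """Eq. 3 evaluated over a single global window."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()   # covariance of x and y
    num = (2.0 * mu_x * mu_y + c1) * (2.0 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return float(num / den)
```

Identical images give MSE = 0 and SSIM = 1, matching the pass condition described above; any structural change (such as the arrow appearing in Fig. 7) drives the MSE up and the SSIM below 1.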
Fig. 7 represents two conditions. In the first condition the two images are exactly the same, so the values were MSE = 0 and SSIM = 1. In the second condition a small arrow has appeared in the output, so the MSE value differs. By combining the color channel technique with MSE and SSIM, the prediction rate is flawless. This technique can find the match between the actual and expected images from the IC for all the given test cases.

Fig 7. Calculating MSE and SSIM

The result is then processed, and pass/fail is written in the Excel sheet against the corresponding test case based on the comparison.

IV. RESULT AND CONCLUSION

The machine vision based learning coupled with the HIL makes the system highly robust for automatic testing with no human intervention. Using the image processing techniques described above (MSE and SSIM), the matching rate is accurate to nearly a hundred percent. When tested in real time under different scenarios, the matching accuracy remained high enough to avoid mistakes. In Fig. 8, the algorithm identifies the change in the number in the set-date field and produces the correct result, a difference the color channel algorithm alone failed to identify. Similarly, in Fig. 9 even a slight change between the images was identified; the match/not-match verdict is written in the Excel sheet against the corresponding test case. This method is very handy for testing large numbers of units, as the matching speed is high.

Fig.8. Result 1

No memory-intensive algorithms such as Histogram of Oriented Gradients (HOG) or Local Binary Patterns (LBP) were used. Simplicity, speed, and accuracy were the key goals, and they are achieved through the color channel, MSE, and SSIM techniques.

Fig.9. Result 2

The user only has to connect the instrument cluster to the HIL and start the functionality test plan (FTP) Excel sheet; after that, all the test cases run automatically. By comparing the snapshots, the system generates the results and stores them in the Excel sheet, just as a human being would visually check the output. This method helps in testing instrument clusters automatically without any human intervention, saving time and manpower.

REFERENCES
[1] A. Mouzakitis, D. Copp and R. Parker, "A hardware-in-the-loop system for testing automotive controller diagnostic software", Proceedings of the Sixteenth International Conference on Systems Engineering (ICSE2003), Coventry, UK, vol. 2, pp. 589-594, 2003.
[2] S. J. Lee, Y. J. Kim, K. Park and D. J. Kim, "Development of hardware-in-the-loop simulator and vehicle dynamic model for testing ABS", SAE Technical Paper Series, 2003-01-0858, 2003.
[3] T. Bertram, F. Bekes, R. Greul, O. Hanke, J. Hab, J. Hilgert, M. Miller, O. Ottgen, P. Opgen-Rhein, M. Torlo and D. Ward, "Modelling and simulation for mechatronic design in automotive".
[4] A. Mouzakitis, R. Humphrey, P. Bennett and K. J. Burnham, "Development, Testing and Validation of Complex Automotive Systems", The 10th Mechatronics Forum Biennial International Conference, Philadelphia, USA, 2006.
[5] "Development of a machine vision system for automotive part inspection", Proceedings of SPIE – The International Society for Optical Engineers, ICMIT 2005: Information Systems and Signal Processing, vol. 6041, pp. 60412J1-6, 2005.


[6] P. Hage and B. Jones, "Machine vision-based quality control systems for the automotive industry", Assembly Automation, vol. 15, no. 4, pp. 32-34, 1995.
[7] A. Shafi, "Machine Vision in Automotive Manufacturing", Sensor Review, vol. 24, no. 4, pp. 337-342, 2004.
[8] Koichi Sakata and Hiroshi Fujimoto, "Perfect Tracking Control of Servo Motor Based on Precise Model with PWM Hold and Current Loop", Power Conversion Conference (PCC '07), DOI: 10.1109/PCCON.2007.373180, pp. 1612-1617, 2007.
[8] S. Yang, M. Cho, H. Lee and T. Cho, "Weld line detection and process control for welding automation", Measurement Science and Technology, vol. 18, pp. 819-826, 2007.
[9] J. C. Noordam, G. W. Otten, A. J. Timmermans and B. H. Van Zwol, "High-speed potato grading and quality inspection based on a colour vision system", Proceedings of the SPIE, Machine Vision Applications in Industrial Inspection VIII, vol. 3966, pp. 206-217, 2000.
[10] C. Pellerin, "Machine vision in experimental poultry inspections", Sensor Review, MCB University Press, vol. 15, no. 4, pp. 23-24, 1995.
[11] C. Qixin, F. Zhuang, X. Nianjiong and F. L. Lewis, "A binocular machine vision system for ball grid array package inspection", Assembly Automation, Emerald Group Publishing, vol. 25, no. 3, pp. 217-222, 2005.
[12] OpenCV library, www.opencv.org
