Vision Based Feature Diagnosis For Automobile Instrument Cluster Using Machine Learning
All content following this page was uploaded by Sathiesh Kumar on 30 November 2017.
Abstract— This paper presents an effective approach to the testing of automotive instrument clusters, combining hardware-in-the-loop (HIL) simulation with a vision based machine learning technique to achieve end to end automation in feature diagnosis and validation. Numerous HIL systems are currently in use for simulating vehicle networks in real time, providing the necessary signals for each test case. There are many approaches that tap the signal from the instrument cluster before it is displayed and compare the captured signal with the expected value for the given test case. These approaches work only at the software level and fail to identify faults in the end display unit of the cluster. The proposed method uses a vision based machine learning system to monitor the cluster visually, thereby identifying faults in the cluster at the end product level. This approach greatly eases the task of testing a large number of units by making onerous repeated tests possible without any human intervention, whereas the current testing method needs human approval for each and every test case, which is a tedious task.

Keywords: Hardware-in-the-loop, Machine Vision, Machine Learning, Automated testing for Instrument cluster.

I. INTRODUCTION

In recent years, there has been an immense growth in the electronically controlled components of cars. The auto industry faces a formidable challenge in the seamless deployment of hardware devices and software architectures within limited development timescales. This work intends to increase confidence in the design and use of complex automotive electrical systems.

The Electronic Control Unit (ECU) for the instrument cluster (IC) is one of the most complex embedded control systems in an automobile. The IC is responsible for displaying all the necessary information for the driver and responds to user commands immediately. All the functional ECUs inside the car are connected with the IC, which gathers information from the various ECUs and relays it to the user. The Controller Area Network (CAN) bus protocol is used for sending and receiving information between the ECUs and the IC. The instrument cluster of a modern car performs many operations, displaying information such as driving conditions, fault diagnostics, warning signals, navigation, reminders, infotainment etc. The information displayed by the instrument cluster should be reliable, so the testing methodologies have to be rigorous and thorough in covering its functionality. It is very difficult to test efficiently and without errors, as the work is labor intensive. Moreover, most of the instrument cluster functionality cannot be verified just by monitoring and comparing the I/O line outputs and inputs or the CAN communication lines. For instance, to confirm that a telltale is switched according to the test cases given by the tester, or that the odometer value appears on the Thin Film Transistor (TFT) display of the IC, manual observation is the only way to check the functionality, by visually inspecting the output. This kind of testing becomes all the more tedious because, with the main message unit on the IC ECU under test, monitoring the hardware lines or the CAN again would not tell whether the functionality is as expected.

In order to overcome this problem, automated testing techniques have been incorporated by the automotive manufacturers [1-3]. Hardware in the Loop (HIL) technology was introduced for the automated testing and validation of instrument clusters, power trains, infotainment systems etc. [4]. This technique has been very successful, as it has the ability to perform dynamic testing, and it greatly reduces the time required for testing with less user intervention. Machine vision systems are being implemented in almost all fields, such as automotive [5-7], robotics [8], and many others [9-11].

The proposed system combines the HIL technique with a vision based machine learning approach, making it a novel approach in the field of vision based testing. HIL systems may vary from company to company, so this paper focuses on how the vision based machine learning approach works with a generic HIL system. Integrating vision and the HIL technique makes the testing fully automated, so that no user intervention is needed. For detecting gauges, warning lights/telltales and the information displayed on the TFT, machine learning algorithms are used to detect the region of interest, extract the information and thereby compare the actual output with the expected output. This approach reduces the time required for testing the instrument cluster, as no user input is needed during the testing process; it also makes onerous repeated tests possible.
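The end-to-end flow described above — inject a signal through the HIL system, observe the cluster output, and compare it against the expected value — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names (inject_signal, capture_display, run_test_case) are hypothetical, and the HIL and camera interactions are stubbed out.

```python
# Minimal sketch of the automated HIL + vision test loop.
# In a real rig, inject_signal would drive the CAN bus via the HIL
# simulator and capture_display would read a camera aimed at the cluster.

def inject_signal(test_case):
    """Stub: the HIL system would place this signal on the vehicle network."""
    return test_case["signal"]

def capture_display(signal):
    """Stub: a fault-free cluster shows exactly what was requested."""
    return signal

def run_test_case(test_case):
    injected = inject_signal(test_case)
    observed = capture_display(injected)
    verdict = "PASS" if observed == test_case["expected"] else "FAIL"
    return {"id": test_case["id"], "verdict": verdict}

test_cases = [
    {"id": 1, "signal": "ABS_WARNING_ON", "expected": "ABS_WARNING_ON"},
    {"id": 2, "signal": "LEFT_TURN_ON", "expected": "LEFT_TURN_OFF"},
]
results = [run_test_case(tc) for tc in test_cases]
```

Each verdict would then be written back against its test case, in the same way the paper's system records results in an excel sheet.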
3. 18 warning telltales will be available for indicating various warning signals like Anti-lock Braking System (ABS) fault, brake fluid low, seat belt status, air bag failure etc.
4. Indicator telltales will be available for indicating fog light status, left and right turn indicator status etc.
5. A TFT display between the two gauges will be available, acting as a message center showing important messages like navigation, anniversary reminders, fuel computer, time and date etc.

Fig.4. Color Channel descriptor

In order to find the similarity between the actual image and the expected image, the Euclidean distance is applied between the images. It takes the sum of squared differences between each entry in the p and q vectors (p is the actual image feature vector and q is the expected image feature vector), as per Eq. 1:

d(p, q) = sqrt( Σ_i (p_i − q_i)² )    (1)

Thus, if the Euclidean distance is equal to zero, it is a perfect match. But the similarity of the images cannot be measured with the help of color statistics alone. For example, suppose the date field in the set birthday menu of the instrument cluster differs but contains the same digits in different locations: if the expected value is 32 and the actual value is 23, this method will not identify the difference, as the Euclidean distance will be the same for both images, as shown in Fig. 5.

Given two images x and y of the same size, SSIM can be calculated as per Eq. 3:

SSIM(x, y) = ((2 μ_x μ_y + c1)(2 σ_xy + c2)) / ((μ_x² + μ_y² + c1)(σ_x² + σ_y² + c2))    (3)

where μ_x and μ_y are the averages of x and y, σ_x² and σ_y² are the variances of x and y, σ_xy is the covariance of x and y, and c1 and c2 are two variables that stabilize the division when the denominator is weak.
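The 32-versus-23 limitation can be reproduced on a toy example: two "images" holding the same pixel values in swapped positions have identical color statistics, so the Euclidean distance between their histograms is zero even though the images differ. A minimal sketch, in which the intensity histogram is a simplified stand-in for the color channel descriptor of Fig. 4:

```python
import math
from collections import Counter

def color_histogram(img):
    """Per-intensity pixel counts — a simple color-statistics descriptor."""
    counts = Counter(px for row in img for px in row)
    return [counts.get(v, 0) for v in range(256)]

def euclidean(p, q):
    """Eq. 1: square root of the sum of squared differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Toy 1x2 "images": the same two pixel values in swapped positions,
# standing in for the digits "32" vs "23" on the cluster display.
expected = [[3, 2]]
actual = [[2, 3]]

# Identical color statistics -> histogram distance is 0 (a false match),
# while the pixel-wise distance does reveal the difference.
hist_dist = euclidean(color_histogram(expected), color_histogram(actual))
pixel_dist = euclidean(expected[0], actual[0])
```

This is exactly why the paper augments the color channel descriptor with the structure-sensitive MSE and SSIM measures.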
Fig.6. Pre-processing

This method provides a quantitative score for the degree of similarity/fidelity and the level of error/distortion between the images. MSE will return the value '0' and SSIM will return the value '1' if the two images are structurally the same. Fig. 7 represents two conditions: in the first condition both images are perfectly the same, so the values were MSE = 0 and SSIM = 1. In the second condition a small arrow has popped up in the output, and so the MSE value differs. By combining the color channel technique with MSE and SSIM, the prediction rate is flawless. This technique can find the match between the actual and the expected image from the IC, for all the given test cases.

Fig 7. Calculating MSE and SSIM

The result will be processed and a pass/fail verdict will be written in the excel sheet, based on the comparison, against the corresponding test cases.

Fig.8. Result 1

No memory intensive algorithms like Histogram of Oriented Gradients (HOG) or Local Binary Patterns (LBP) were used. Simplicity, speed and accuracy are the key goals, and they are achieved through the color channel, MSE and SSIM techniques.

Fig.9. Result 2

The user just has to connect the instrument cluster to the HIL and start the FTP excel; after that all the test cases run automatically. By comparing the snapshots, the system generates the result and stores it in the excel sheet, just as a human being would by visually checking the output. This method helps in testing instrument clusters automatically without any human intervention, saving time and manpower.
IV. RESULT AND CONCLUSION

Using machine vision based learning coupled with the HIL makes the system extremely robust for automatic testing with no human intervention. By utilizing the above mentioned image processing techniques (MSE and SSIM), the matching rate is accurate to nearly a hundred percent. On testing in real time with different scenarios, the matching accuracy is high enough to avoid mistakes. In Fig. 8, the algorithm identifies the change in the number in the set date field and produces an accurate result, where the color channel algorithm failed to identify the difference. Similarly, in Fig. 9 even a slight change in the images is identified, and based on the match/not-match value, the result is written in the excel sheet against the corresponding test cases. This method will be very handy for testing a large number of units, as the speed of matching is high.

REFERENCES
[1] A. Mouzakitis, D. Copp and R. Parker, "A hardware-in-the-loop system for testing automotive controller diagnostic software", Proceedings of the Sixteenth International Conference on Systems Engineering (ICSE2003), Coventry, UK, vol. 2, pp. 589-594, 2003.
[2] S. J. Lee, Y. J. Kim, K. Park and D. J. Kim, "Development of hardware-in-the-loop simulator and vehicle dynamic model for testing ABS", SAE Technical Paper Series, 2003-01-0858, 2003.
[3] T. Bertram, F. Bekes, R. Greul, O. Hanke, J. Hab, J. Hilgert, M. Miller, O. Ottgen, P. Opgen-Rhein, M. Torlo and D. Ward, "Modelling and simulation for mechatronic design in automotive.
[4] A. Mouzakitis, R. Humphrey, P. Bennett and K. J. Burnham, "Development, Testing and Validation of Complex Automotive Systems", The 10th Mechatronics Forum Biennial International Conference, Philadelphia, USA, 2006.
[5] "Development of a machine vision system for automotive part inspection", Proceedings of SPIE – The International Society for Optical Engineers, ICMIT 2005: Information Systems and Signal Processing, vol. 6041, pp. 60412J1 – 6, 2005.
[6] P. Hage and B. Jones, “Machine vision-based quality control systems for
the automotive industry”, Assembly Automation, vol. 15, no. 4, pp. 32 – 34,
1995.
[7] A. Shafi, "Machine Vision in Automotive Manufacturing", Sensor Review, vol. 24, no. 4, pp. 337-342, 2004.
[8] Koichi Sakata and Hiroshi Fujimoto, "Perfect Tracking Control of Servo Motor Based on Precise Model with PWM Hold and Current Loop", Power Conversion Conference (PCC '07), DOI:10.1109/PCCON.2007.373180, pp. 1612-1617, 2007.
[8] S. Yang, M. Cho, H. Lee and T. Cho, “Weld line detection and process
control for welding automation”, Measurement Science and Technology, vol.
18, pp. 819 – 826, 2007.
[9] J. C. Noordam, G. W. Otten, A. J. Timmermans and B. H. Van Zwol,
“High-speed potato grading and quality inspection based on a colour vision
system”, Proceedings of the SPIE, Machine vision applications in industrial
inspection VIII, vol.3966, pp. 206-217, 2000.
[10] C. Pellerin, “Machine vision in experimental poultry inspections”, Sensor
Review, MCB University Press, vol. 15, no. 4, pp.23 – 24, 1995.
[11] C. Qixin, F. Zhuang, X. Nianjiong and F. L. Lewis, “A binocular
machine vision system for ball grid array package inspection”, Assembly
Automation, Emerald Group Publishing, vol. 25, no. 3, pp. 217 – 222, 2005.
[12] OpenCV library, www.opencv.org