
Journal of Mechanical Science and Technology, April 2014 · DOI: 10.1007/s12206-014-0133-3

Wheel Alignment Inspection by 3D Point Cloud Monitoring†

Dongyoub Baek1, Sungmin Cho1 and Hyunwoo Bang1,*

1 School of Mechanical and Aerospace Engineering, Seoul National University, Seoul 151-742, Korea

Abstract

Today’s wheel alignment inspection systems adopt various computer-vision technologies. However, they require high-end cameras, precisely manufactured targets, and massive calculation loops because they rely on low-dimensional data (two-dimensional images) to measure higher-dimensional information (the three-dimensional orientation of the wheel). To improve on this, we present a simple and inexpensive method using consumer-grade depth-sensing cameras. It directly utilizes point clouds acquired from a low-cost depth-sensing camera such as Kinect. All points within the region of interest (ROI) contain geometrical information about the wheel and are used for the alignment inspection procedure. In this paper, we evaluate the method's feasibility by examining whether the wheel orientation can be aligned to a desired orientation using only the point clouds. We implemented a one-wheel-based prototype and conducted comparative experiments with an existing commercial system. The results show that the proposed method provides satisfactory performance. We believe that our method is feasible for practical use and has great potential to be an effective alternative to existing wheel alignment inspection methods.

Keywords: Wheel alignment; Point cloud; Depth-sensing camera; Kinect

1. Introduction

Digitizing and visualizing real-world objects in three dimensions has been one of the primary research topics in the computer vision and graphics communities. The development of 3D scanners and scanning methods has provided solutions to many researchers. The surfaces of physical objects can easily be digitized into a virtual world and represented as a point cloud, which is composed of a large number of 3D points. Moreover, the advancement of sensor technology and improved computer performance over the past decade have led to improvements in the quality of the points and the speed at which they are processed. As higher density and accuracy have been achieved, the geometrical error between point cloud representations in the virtual world and physical objects in the real world has been remarkably reduced. Accordingly, point clouds have become one of the main resources for many engineers and designers. They are widely used in areas ranging from product development to quality analysis in various industrial fields, such as manufacturing, reverse engineering, and biomedical engineering.
Recently, among the several types of 3D scanners, depth-sensing cameras have come into common use, as they can scan a subject optically without any contact. Further, because they produce two-dimensional range images with a depth value per pixel, point clouds can easily and rapidly be generated from a range image, with as many 3D points as the total number of pixels in the image. In the past, depth-sensing cameras were considered affordable only for major corporations or large research institutes because they were very expensive, high-end products. However, recent progress in 3D scanning technology and an increasing demand for point clouds have led to the release of several consumer-grade depth-sensing cameras. Microsoft Kinect, released in November 2010, is one such camera, available for only around $150. Although Kinect was originally designed as a gaming interface for motion capture and gesture recognition applications, its satisfactory performance and very affordable price drew the attention of many researchers. In particular, its ability to produce a range image stream in real time and the availability of an open-source software development kit make it possible to generate and process a point cloud stream in real time. Many researchers in various engineering communities, such as manufacturing, robotics, and bioengineering, as well as in computer vision and graphics, have attempted to use it in their work. [1-6] As Kinect has provided diverse opportunities for many researchers, more studies and applications based on Kinect and point clouds have emerged. Like many other researchers, we noticed the usefulness of this technology. As an engineering application, we attempted to utilize it for automotive wheel alignment inspection.
Wheel alignment inspection is an important part of the automotive maintenance industry. It involves measuring the geometric alignment among the four wheels of a vehicle, the road, and the automotive body, and adjusting the alignment to the specifications provided by the automotive manufacturer. One of the main issues in wheel alignment inspection is measuring the wheel posture as accurately and as quickly as possible. In the past, this was difficult to achieve with mechanical or electronic sensors. In the late 1990s, however, various computer vision technologies were introduced and brought improvements in accuracy and measuring time. [7] Since then, most automotive maintenance shops have adopted computer-vision-based inspection systems. Such a system is generally composed of specially designed target boards and image acquisition modules, including CCD video cameras and infrared light-emitting diodes (IR LEDs) as illuminants. The target boards contain reflective materials with specially designed shapes and patterns on their surfaces. To measure the orientation of a wheel, a target board is mounted onto its rim (the outer edge of the wheel), and the reflective materials on the board's surface are irradiated as the IR LEDs illuminate it. Each CCD video camera continuously acquires two-dimensional images of each target board. Because the acquired images contain the shape changes of the reflective materials, the three-dimensional orientation of each wheel and its transformation are calculated based on projective geometry. [8-9] However, because this method relies on low-dimensional data (two-dimensional images) to measure higher-dimensional information (three-dimensional orientation), high-quality image acquisition and massive calculation loops are essential for accurate measurements. The target boards have to be precisely designed and manufactured, and high-definition video cameras have to be used. These requirements make computer-vision-based inspection systems expensive and complicated. Despite these issues, previous studies have focused on camera calibration [10-11] and target pattern detection [12-14] to achieve better accuracy.
In this paper, we propose a simple and inexpensive solution based on point clouds generated from a depth-sensing camera. Instead of a high-definition CCD video camera, we used Kinect as a low-cost depth-sensing camera. Because the method directly utilizes point clouds acquired from Kinect, the three-dimensional information of the wheel surface itself, or of a planar object mounted in a manner similar to a target board, is obtained directly from the acquired point clouds. Hence, a specially manufactured target and massive projective geometry calculation loops are not necessary with the proposed method. To verify the feasibility of the proposed system, we implemented a one-wheel-based prototype and conducted comparative experiments with an existing commercial inspection system. Based on the results, we confirmed its feasibility for practical use. In the following section, we provide a brief overview of wheel alignment inspection. Section 3 describes the details of our inspection method based on point cloud alignment. The experimental setup and procedures to verify the practical feasibility of the proposed system are described in section 4, and the results are shown and discussed in section 5. In the last section, we provide our conclusions.

2. Overview of automotive wheel alignment inspection [15]

2.1 Definition and its importance


A wheel alignment inspection involves measuring the geometric alignment among the four wheels, the road surface, and the automotive body, and adjusting the alignment to within the specifications provided by the automotive manufacturer. The conditions of wheel alignment affect directional stability, meaning the tendency of the vehicle to go straight when not explicitly being steered. Directional stability is one of the most important factors for driving safety and comfort. When directional stability is proper, the driver has predictable directional control, and uneven tire wear is prevented because the friction between the tires and the road is minimized. The conditions of wheel alignment are often altered by various factors. Shocks from road irregularities while driving, linkage deformation caused by a worn steering system, and the replacement of suspension parts such as ball joints or control arms are the major factors that push the alignment out of its correct condition. Technicians recommend inspecting the alignment periodically.
The alignment conditions are represented as wheel alignment parameters. The major parameters are the toe, camber, and caster. Toe is defined as the angle formed between the centerline of a wheel and the centerline of the automotive body. It is closely related to driving comfort, tire lifespan, and fuel efficiency. In many cases, it is measured in millimeters, where 1 mm corresponds to approximately 0.1° for general wheel sizes from 13 to 18 inches. Camber is defined as the angle formed between the centerline of a wheel and the vertical to the road surface. It affects steering controllability and stability. Caster is defined as the angle formed between the steering axis of the wheel and the vertical to the road surface. Like camber, it affects steering controllability and stability. These major parameters are always inspected and are the only ones that are directly adjustable. The specifications for each parameter are generally provided as a range, usually from 0.5° to 2.0°. Other parameters, such as the steering axis inclination (SAI), setback, and thrust angle, are defined but inspected only when necessary.
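As a rough illustration of this millimeter convention (our own arithmetic under an assumed measurement geometry, not taken from the paper), if the toe in millimeters is read as the difference between track measurements taken at the front and rear of the tire's outer diameter D, the corresponding angle is

\[
\theta \approx \arcsin\!\left(\frac{\mathrm{toe}\,[\mathrm{mm}]}{D\,[\mathrm{mm}]}\right),
\qquad D \approx 600\ \mathrm{mm} \;\Rightarrow\; \theta \approx \arcsin\!\left(\tfrac{1}{600}\right) \approx 0.1^{\circ},
\]

which is consistent with the stated rule of thumb for common tire diameters.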

2.2 Computer vision-based wheel alignment inspection system

As briefly explained in section 1, a computer-vision-based system is commonly used in most automotive maintenance shops. The overall structure of this type of system is depicted in Fig. 1. It is generally composed of four target boards (one per wheel) that reflect the wheel postures, two image-acquisition modules, each including two units of CCD video cameras (one per wheel) and IR LED illuminants, and a computer with software that provides the manufacturer's specifications and information about the current alignment conditions. In particular, the CCD video cameras have a resolution of at least two megapixels, as the measuring accuracy depends on the acquired image quality, and the targets have precisely designed and manufactured patterns. Each target board is mounted onto the rim of a wheel and is irradiated by the illuminants. Each CCD video camera continuously acquires two-dimensional images of its target. The computer and software calculate the three-dimensional information and wheel alignment parameters of each wheel by analyzing the two-dimensional images and their transformations based on projective geometry.
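As a point of reference (a standard projective-geometry formulation, not necessarily the exact model used by these commercial systems), a point (X, Y, 0) on a planar target is related to its image coordinates (u, v) through a homography built from the camera intrinsic matrix K, the first two columns r1, r2 of the target's rotation matrix, and its translation t:

\[
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K\,\begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{t} \end{bmatrix}
\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}.
\]

Recovering the wheel's three-dimensional orientation then amounts to estimating and decomposing this homography from many detected pattern points in every frame, which is why precise targets and heavy computation are required.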
When the automobile is initially set up for inspection, technicians move the steering wheel to the left and right and roll the automobile back and forth to determine the rotation axis and the rolling axis of each wheel. Once both axes are determined, the image transformations are analyzed with respect to these axes and the alignment parameters are calculated. The calculated alignment parameters and related conditions are provided in real time and are represented visually as numerical values and colors, such as green for values within the specifications and red for values outside them. Technicians inspect and adjust the alignment based on this visual information.

Fig. 1. The overall structure of the computer-vision-based wheel alignment inspection system [16]

3. System configurations
As described in the previous section, the current commercial system requires high-end video cameras, precisely manufactured targets, and massive calculation loops. Such requirements make the system expensive. To improve on this, we propose a new inspection method that is simple and inexpensive. The method directly utilizes point clouds acquired from a low-cost depth-sensing camera such as Kinect. When the wheel surface is set as the region of interest (ROI), all points within the ROI contain geometrical information about the wheel posture and its changes. Hence, the orientation of the wheel can be changed or aligned based only on the point clouds.
Our purpose in this paper was to examine whether this new method could be used for practical inspections. For this purpose, we implemented a one-wheel-based prototype composed of a Kinect sensor and software for processing point clouds. The details are described in the following subsections.

3.1 Microsoft Kinect Sensor

Because a wealth of information about Kinect is available in the literature [17-19], only the basic specifications are briefly presented here. As shown in Fig. 2, Kinect includes an infrared (IR) projector, an IR monochrome CMOS camera, and an RGB camera. The IR projector projects a structured light pattern onto the subject, and the IR camera, with 307,200 (640 × 480) pixels, captures the pattern and its deformation and produces range images at 30 frames per second. The depth values are measured by triangulation and are quantized to 2048 steps (11 bits). The depth resolution depends on the measuring distance, and the sensor is known to provide the best resolution at distances just over 0.8 m from the subject. [20] In our prototype, the depth resolution and accuracy are the most important characteristics for evaluating performance. Hence, we installed the sensor at a distance of around 0.9 m from the subject surface to acquire the highest possible quality of depth data.
Fig. 2. Microsoft Kinect. An IR projector projects a structured light pattern onto the subject, and an IR camera captures the pattern and its deformation to produce range images. The depth values are measured by triangulation.
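As a rough sketch of why the resolution degrades with distance (using values commonly reported in Kinect calibration studies rather than figures from this paper: focal length f ≈ 580 px, IR baseline b ≈ 75 mm, disparity quantized to about 1/8 px), the depth quantization step at range Z is approximately

\[
\Delta Z \approx \frac{Z^{2}}{f\,b}\,\Delta d
\;\approx\; \frac{(900\ \mathrm{mm})^{2}}{580 \times 75\ \mathrm{mm}} \times \frac{1}{8}
\;\approx\; 2.3\ \mathrm{mm} \quad \text{at } Z = 0.9\ \mathrm{m},
\]

which is on the same order as the ± 2 mm depth tolerance adopted in section 3.2.3.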

3.2 Software for point cloud alignment

We implemented our own software for the point cloud alignment process. The software handles the generation, capture, and alignment of the point clouds. The generation part preprocesses the raw range images acquired from Kinect and generates point clouds from them. The capture part sets the ROI and saves a single point cloud as the reference data. The alignment part checks, for all points within the ROI, whether the currently generated points are aligned with the reference points, changes the colors of the points that are not aligned, and provides a visual guide showing the alignment directions toward the reference points. More details are given in the following subsections. The software was implemented using the Processing programming tool [21], which is based on the Java language, and the OpenNI library. [22] Both are open-source projects that provide useful functions for working with point clouds.

3.2.1 Preprocessing and point cloud generation

Raw range images acquired from Kinect mostly include noisy data, as the sensor uses infrared light. The noise characteristics depend on various factors, such as the intensity of the ambient light and the surface material of the subject. [19] Except in severe cases, such as when the surface contains highly reflective materials or when strong ambient light is present, the noise can be effectively reduced using a filtering algorithm. In our implementation, we adopted a bilateral filtering algorithm, which can easily be applied in real-time processing because it is non-iterative, local, and simple. With a non-linear combination of nearby pixel values based on both geometric closeness (domain) and photometric similarity (range), it smooths pixel values while preserving edges with sharp variations. [23] After the noise reduction process, camera calibration has to be applied, as raw range images are affected by lens distortion. Based on the pinhole camera model and camera-intrinsic parameters such as the focal length and the principal point of the image, the distortion can be corrected and the real-world coordinates of each pixel can be obtained. [24] The camera-intrinsic parameters have been determined experimentally and reported in other Kinect-related studies. [25] These preprocessing steps are conducted every frame, after which point clouds with real-world coordinates are generated.
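A minimal sketch of this back-projection step (not the authors' code; the intrinsic values below are illustrative placeholders, and a real system would take them from a Kinect calibration such as [25]):

```java
// Back-project a filtered range image into a point cloud using the pinhole model.
// FX, FY, CX, CY are assumed, illustrative intrinsics in pixels.
public class PointCloudGenerator {
    static final double FX = 580.0, FY = 580.0;   // assumed focal lengths [px]
    static final double CX = 320.0, CY = 240.0;   // assumed principal point [px]

    /** depthMm[v][u] is the filtered depth in millimeters; 0 marks a missing measurement. */
    static double[][] toPointCloud(int[][] depthMm) {
        int h = depthMm.length, w = depthMm[0].length;
        double[][] points = new double[w * h][3];
        int n = 0;
        for (int v = 0; v < h; v++) {
            for (int u = 0; u < w; u++) {
                double z = depthMm[v][u];
                if (z <= 0) continue;                // skip measurement losses
                points[n][0] = (u - CX) * z / FX;    // real-world X [mm]
                points[n][1] = (v - CY) * z / FY;    // real-world Y [mm]
                points[n][2] = z;                    // real-world Z [mm]
                n++;
            }
        }
        return java.util.Arrays.copyOf(points, n);   // trim to the valid points
    }
}
```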

3.2.2 ROI setting and capturing

When the generation of the point clouds begins, the ROI can be set within the overall scene. As only the points within the ROI are used in the subsequent alignment step, they have to be measured accurately. However, when the surface of the subject has high-gloss areas, such areas cannot be measured because the infrared light pattern is reflected away. This measurement loss cannot be resolved by the noise reduction step during preprocessing. This problem arose when we attempted to set the wheel surface itself as the ROI in our implementation. Hence, high-gloss areas on the wheel surface have to be covered with a non-reflective material before capturing. To do this, we used a matte adhesive film (450-08M, Avery, USA), which is very thin (0.075 mm) and can easily be attached and detached. As shown in Fig. 3a, the matte film was attached to the wheel surface. The point cloud of the overall scene is shown in Fig. 3b, where the yellow circular line is the boundary of the ROI. Note that no measurement loss occurred within the ROI thanks to the film. After setting the ROI, a single point cloud is captured as the reference data. All data related to the ROI, in this case its position, its radius, and the total number of points it contains, are saved as well. The point cloud extracted for the entire wheel, including the tire, is presented in Fig. 3c, with the points within the ROI highlighted.
Fig. 3. The test setup and ROI setting: (a) Kinect is installed at a distance of around 0.9 m from the wheel surface, and matte film is attached to cover the high-gloss areas on the wheel surface. (b) The ROI is set as the inner area covered with the film. Note that no measurement loss by reflection occurs for any point within the ROI. (c) The point cloud extracted for the entire wheel, including the tire, with the points within the ROI highlighted.
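The paper describes the ROI only as a circular boundary with a saved position and radius; a minimal sketch of one way such an ROI could be represented as a per-pixel mask (an assumption on our part, not the authors' data structure):

```java
// Build a boolean mask marking the image pixels that fall inside a circular ROI.
// (cu, cv) is the assumed ROI center and radiusPx its radius, both in pixels.
public class RoiMask {
    static boolean[][] circular(int width, int height, int cu, int cv, int radiusPx) {
        boolean[][] mask = new boolean[height][width];
        for (int v = 0; v < height; v++) {
            for (int u = 0; u < width; u++) {
                int du = u - cu, dv = v - cv;
                mask[v][u] = du * du + dv * dv <= radiusPx * radiusPx;  // inside the circle
            }
        }
        return mask;
    }
}
```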

3.2.3 Point cloud alignment

Once the reference point cloud is properly captured, the point clouds generated thereafter are compared to it in real time. The depth values of all points within the ROI in both point clouds are compared and the differences are calculated. Considering the small amount of remaining noise and the depth resolution of Kinect, we set the depth tolerance to ± 2 mm from the reference data. If the difference is within this tolerance, the point is regarded as an inlier; otherwise, it is classified as an outlier. Points classified as inliers are drawn in their original color, as acquired from the RGB camera of Kinect. Points classified as outliers are colored red or blue according to the direction of their difference.
The overall alignment process and its details are depicted in Fig. 4. To illustrate the concept simply in this section, the process was conducted by changing the steering instead of specific wheel alignment parameters. When the reference point cloud is properly captured, all of the current points within the ROI are within the tolerance and are classified as inliers because there is no steering change (see Fig. 4a). The gauge on the right side of the screen shows the percentage of inliers among the total points within the ROI; this indicates how well the current points are aligned with the reference points. When the steering changes, most points within the ROI become outliers (see Fig. 4b). Depending on whether the difference is negative or positive, the points classified as outliers appear in red or blue, respectively. Here, negative and positive mean moving closer to and farther from Kinect, respectively, as the depth value is measured as the distance from the Kinect IR camera to the surface of the subject. The extent of the difference is presented as color saturation: the saturation increases with the magnitude of the difference, and vice versa. In addition, the directions for aligning the current points with the reference points are provided as arrows near the ROI. By changing the steering in the direction of the arrows, the current points can be aligned with the reference points. When the steering is restored to the initial position, all points within the ROI become inliers again and their original colors are restored (see Fig. 4c).

Fig. 4. The scheme for the point cloud alignment procedure: (a) initially, the current point
cloud is aligned to the reference point cloud and all current points are within the tolerance. (b)
As the orientation of the wheel changes, outliers that are not within the tolerance arise. They
are colored in red for negative cases and in blue for positive cases. Also, arrows are provided
to show the directions of alignment to the reference point cloud. (c) After restoring the
orientation of the wheel, all current points are within the tolerance. The colors of the points
return to their initial colors, and the arrows disappear.
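A minimal sketch of the per-frame inlier/outlier test and gauge value described above (our own reconstruction from the text, not the authors' implementation); it reuses the hypothetical ROI mask from section 3.2.2:

```java
// Classify ROI points as inliers or outliers against the reference depths and
// return the gauge value: the percentage of ROI points within the tolerance.
public class AlignmentCheck {
    static final double TOLERANCE_MM = 2.0;   // +/- 2 mm, as stated in the paper

    static double inlierPercentage(boolean[][] roiMask, int[][] refMm, int[][] curMm) {
        int total = 0, inliers = 0;
        for (int v = 0; v < roiMask.length; v++) {
            for (int u = 0; u < roiMask[0].length; u++) {
                if (!roiMask[v][u]) continue;          // only points inside the ROI
                total++;
                int diff = curMm[v][u] - refMm[v][u];
                if (Math.abs(diff) <= TOLERANCE_MM) {
                    inliers++;                          // inlier: keeps its original RGB color
                }
                // Outliers would be drawn in red (diff < 0, closer to Kinect) or
                // blue (diff > 0, farther away), with saturation scaled by |diff|.
            }
        }
        return total == 0 ? 0.0 : 100.0 * inliers / total;
    }
}
```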
4. Experimental verification

To verify that our method is feasible for practical usage, we conducted comparative
experiments between our prototype and an existing commercial system. Based on the
measured value from the commercial system, we examined and evaluated the performance of
our prototype.

4.1 Experimental setup

For the comparative experiment, we chose a typical computer-vision-based inspection system (DSP-600, Hunter, USA). It uses CCD video cameras with a high resolution of around five megapixels (2608 × 1952). The vehicle used in the experiment had a MacPherson strut suspension, a very commonly used type. Because the toe was the simplest of the adjustable parameters to adjust, we conducted the experiment only with the toe value. Fig. 5 shows the overall setup for the experiment. Only the front-right wheel was used. As previously mentioned, because Kinect has its best depth resolution at distances exceeding 0.8 m, we installed Kinect at a distance of around 0.9 m from the target.

Fig. 5. The overall setup of the comparative experiment


In the commercial system, the target boards must remain fixed once they are mounted to the wheels. Because the inspection and adjustment processes are performed based on the initially determined rotation and steering axes, the target boards must not be attached or detached during operation. Hence, we had to examine the orientation change of the target board instead of the wheel surface itself. The target, however, contains highly reflective material because the commercial system relies on its reflected images, and this leads to measurement loss from IR reflection when using Kinect. To cover the reflective area, we attached matte film onto the target surface and carefully detached it whenever we needed to check the value from the commercial system. Figs. 6a and 6b show the target covered with the film and the point cloud of the overall scene with the ROI, respectively. Fig. 6c shows only the point cloud of the target board. The points within the ROI are highlighted, and the total number of points was around 14,600.

Fig. 6. Target covering for the experiment and the ROI setting: (a) Matte film is attached onto the target surface to cover the reflective material of the original target. (b) The ROI is set as the inner area of the covered target. Note that no measurement loss by reflection occurs for any point within the ROI thanks to the film. (c) The point cloud extracted for the target board only. The points within the ROI are highlighted, and the total number of points was around 14,600.

4.2 Experimental Procedure

The purpose of the experiment was to verify whether the alignment condition could be restored from an arbitrarily modified condition to the original condition using only our point cloud alignment method. To examine this, we conducted the experiment with the following procedure, repeated for 10 trials. The overall procedure is also depicted in Fig. 7.

Fig. 7. The scheme of the overall experimental procedure.

A) Initially, we checked the value reported by the commercial inspection system. As previously mentioned, we used only the toe, so we recorded the toe value.
B) We then carefully attached the film onto the target surface for use with Kinect. After covering the target surface, we set the ROI inside the covered area and captured the initial point cloud as the reference data. In case outliers accidentally arose due to severe noise, we recaptured the initial point cloud until there were no outliers.
C) After obtaining proper reference data, we arbitrarily changed the toe by a sufficient amount and checked whether points had become outliers by observing their color and saturation.
D) To check the changed value from the commercial system, we carefully detached the film and recorded the changed toe value.
E) After covering the target surface again, we restored the toe using only our point cloud alignment method. During the adjustment, we observed the color changes of the points, the arrows guiding the adjustment direction, and the gauge value indicating the extent of the alignment. Considering that the adjustment was a delicate process conducted manually, we regarded the alignment as complete when the gauge value exceeded 98 %.
F) When the gauge value reached and remained above 98 %, we carefully detached the film and checked the restored toe value from the commercial system.
5. Results and Discussion

Table 1. The Experimental Results for 10 Trials

Trial   Reference value [mm]   Changed value [mm]   Restored value [mm]   Error [mm]   Point cloud alignment [%]
1             1.0                    5.6                  1.0                 0.0             99.94
2             1.5                    8.8                  1.4                -0.1             99.20
3             1.4                   -5.9                  1.6                 0.2             98.36
4             0.8                   -6.5                  0.9                 0.1             98.53
5             1.0                   -6.1                  1.3                 0.3             98.15
6             1.3                    9.8                  1.9                 0.6             98.46
7             1.0                    8.6                  1.4                 0.4             98.62
8             1.1                    6.2                  1.1                 0.0             99.69
9             1.1                   -4.7                  1.1                 0.0             99.50
10            1.0                   -4.4                  1.1                 0.1             98.59

Fig. 8. Error in each trial. The error in each trial was inside the tolerance (light blue zone), so the restored result could be regarded as correctly aligned to the initial reference condition.
Table 1 shows the results of all 10 trials. As shown in Table 1, the restored values for the first, eighth, and ninth trials matched the reference values, while for the second, third, fourth, and tenth trials the error was within 0.2 mm in each case. Our prototype showed an average error of 0.18 (± 0.199) mm between the reference value and the restored value. Considering that the specification is provided as a range, commonly from 0.5 mm to 2.0 mm, so that the tolerance is as large as 1.5 mm, we can evaluate our prototype as providing satisfactory results. Fig. 8 shows that the error in each trial was inside the tolerance, so the restored result could be regarded as correctly aligned to the initial reference condition. The degree of deviation can be considered acceptable, as the commercial system itself showed fluctuations of up to 0.2 mm in the measured value during the checks.

Fig. 9. Error sizes between the initial and restored values vs. the point cloud alignment
percentages

The relationship between the point cloud alignment percentage and the error size is shown in Fig. 9. As explained in the previous section, we regarded the adjustment as complete when the point cloud alignment percentage exceeded 98 %. Accordingly, the point cloud alignment ranged from 98.0 % to 100.0 %. Overall, the error size tended to decrease as the alignment percentage increased. In particular, we noted the results of the first, second, eighth, and ninth trials, in which the point cloud alignment exceeded 99 %. In those four trials, the average error was only 0.05 (± 0.025) mm. Hence, we believe that our current prototype would provide better alignment results if the criterion for adjustment completeness were set to exceed 99 %.
From the overall results, we find that our point cloud alignment method provides a reasonable level of performance and is feasible for practical use. In the current prototype, however, the results were sensitive to the point cloud alignment percentage, as the number of points within the ROI was only around 14,600 out of the 307,200 points that Kinect can generate. Hence, the sensitivity will improve if more points are contained within the ROI. In addition, as more points are used, the noise effect will also be reduced. For this, a lens with an optimal focal length and field of view has to be designed. We are planning to design a new lens for Kinect and conduct comparative experiments with the same procedure used in this study. Further, the implementation of two- and four-wheel-based prototypes is also planned. With two or four Kinect devices, a complete inspection system can be implemented. The position and orientation of each camera can be obtained from the camera-extrinsic parameters, and thus the relationship between the cameras can be obtained. [24] All wheels can then be inspected at the same time. Although four Kinect devices with new lenses are necessary, the cost of building a complete system would be much lower than that of the currently used commercial system. Hence, we believe that our method is feasible for practical use and has great potential to be an effective alternative to existing wheel alignment inspection methods.

6. Conclusions

We proposed a new wheel alignment inspection method that works via point cloud alignment. To verify the method, we implemented a one-wheel-based prototype and conducted comparative experiments with an existing commercial system. The experimental results showed that it is feasible for practical use as a wheel alignment inspection method. In particular, when the degree of point cloud alignment exceeds 99 %, our prototype provides its best performance and shows almost no difference compared to the commercial system. The sensitivity to the alignment percentage will be improved in further research. With our method, a complete inspection system can be implemented at a much lower cost than the current commercial system. Therefore, we believe that our method is feasible for practical use and has great potential to be an effective alternative to existing wheel alignment inspection methods.
References

[1] J. Tong, J. Zhou et al., Scanning 3D full human bodies using Kinects, IEEE Transactions on Visualization and Computer Graphics, 18(4) (2012) 643-650.
[2] S. Izadi, D. Kim et al., KinectFusion: Real-time 3D reconstruction and interaction using a
moving depth camera, Proceedings of the 24th annual ACM symposium on User interface
software and technology, Cambridge, Massachusetts, USA (2011) 559-568.
[3] P. Henry, M. Krainin et al., RGB-D mapping: Using the Kinect-style depth cameras for
dense 3D modeling of indoor environments, The International Journal of Robotics
Research, 31(5)(2012) 647-663.
[4] S. Bauer, J. Wasza et al., Multi-modal surface registration for markerless initial patient
setup in radiation therapy using Microsoft’s Kinect sensor, Proceedings of the 13th IEEE
International Conference on Computer Vision Workshops, Barcelona, Spain (2011) 1175-
1181.
[5] J. Xia and R. A. Siochi, A real-time respiratory motion monitoring system using Kinect:
Proof of concept, Medical Physics, 39(5)(2012) 2682-2685.
[6] M. Alnowami, B. Alnwami et al., A quantitative assessment of using the Kinect for Xbox
360 for respiratory surface motion tracking, Proceedings of SPIE on Medical Imaging
2012, San Diego, California, USA (2012).
[7] D. J. Christian and H. Shroff, Machine vision-based alignment: space to factory to garage,
Proceedings of SPIE on Machine Vision Applications, Architectures, and Systems
Integration VI, Pittsburgh, Pennsylvania, USA (1997).
[8] B. F. Jackson, Method and apparatus for determining the alignment of motor vehicle
wheels, US Patent 5724743 (1998).
[9] L. Wenhao, G. Yan et al., Research on the machine vision system for vehicle four-wheel
alignment parameters, Proceedings of the 30th Chinese Control Conference, Yantai, China
(2011) 3192-3195.
[10] X. Hu, Study on calibration methods of two cameras in vehicle’s four-wheel alignment
based on 3D vision, Proceedings of the third International Congress on Image and Signal
Processing, Yantai, China (2010) 414-416.
[11] M. S. Park, J. W. Kwon et al., Experimental study on camera calibration and pose
estimation for the application to vehicle’s wheel alignment, Proceedings of the second
International Joint Conference on SICE-ICASE, Busan, Korea (2006) 2952-2957.
[12] X. Guan, S. Jian et al., An image enhancement method based on gamma correction,
Proceedings of the second International Symposium on Computational Intelligence and
Design, Changsha, China (2009) 60-63.
[13] X. Guan, S. Jian et al., A feature points matching method for calibration target images, Proceedings of the second International Workshop on Computer Science and Engineering, Qingdao, China (2009) 263-266.
[14] Z. Tao, S. Changku et al., Monocular vision measurement system for the position and
orientation of remote object, Proceedings of SPIE on International Symposium on
Photoelectronic Detection and Imaging, Beijing, China (2007).
[15] D. Knowles, Today’s Technician – Shop Manual for Automotive Suspension & Steering Systems, Fourth Ed., Thomson Delmar Learning, New York, USA (2007).
[16] http://www.hunter.com
[17] http://www.xbox.com/en-us/kinect/
[18] http://www.primesense.com
[19] C. D. Mutto, P. Zanuttigh et al., Time-of-Flight Cameras and Microsoft Kinect™, Springer, New York, USA (2012).
[20] K. Khoshelham and S. O. Elberink, Accuracy and resolution of Kinect depth data for
indoor mapping applications, Sensors, 12(2) (2012) 1437-1454.
[21] http://processing.org
[22] http://openni.org
[23] C. Tomasi and R. Manduchi, Bilateral filtering for gray and color images, Proceedings of
the sixth International Conference on Computer Vision, Bombay, India (1998) 839-846.
[24] B. Cyganek and J. P. Siebert, An Introduction to 3D Computer Vision Techniques and
Algorithms, John Wiley & Sons, New York, USA (2009).
[25] http://nicolas.burrus.name/index.php/Research/KinectCalibration/

