
Proceedings of the 2003 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems
Las Vegas, Nevada · October 2003

Homography-Based Visual Servo Tracking Control of a Wheeled Mobile Robot
J. Chen†, W. E. Dixon‡, D. M. Dawson†, and M. McIntire†

†Department of Electrical & Computer Engineering, Clemson University, Clemson, SC 29634-0915
‡Eng. Science and Tech. Div. - Robotics, Oak Ridge Nat. Lab., P.O. Box 2008, Oak Ridge, TN 37831-6305
E-mail: [email protected]

*This research was supported in part by the U.S. DOE Office of Biological and Environmental Research (OBER) Environmental Management Sciences Program (EMSP) project ID No. 82797 at ORNL for the DOE Office of Science (SC), a subcontract to ORNL by the Florida Department of Citrus through the University of Florida, and by U.S. NSF Grant DMI-9457967, ONR Grant N00014-99-1-0589, a DOC Grant, and an ARO Automotive Center Grant.

Abstract

A visual servo tracking controller is developed in this paper for a monocular camera system mounted on an underactuated wheeled mobile robot (WMR) subject to nonholonomic motion constraints (i.e., the camera-in-hand problem). A prerecorded image sequence (e.g., a video) of three target points is used to define a desired trajectory for the WMR. By comparing the target points from the prerecorded sequence with the corresponding target points in the live image, projective geometric relationships are exploited to construct a Euclidean homography. The information obtained by decomposing the Euclidean homography is used to develop a kinematic controller. A Lyapunov-based analysis is used to develop an adaptive update law to actively compensate for the lack of depth information required for the translation error system.

I. Introduction

Wheeled mobile robots (WMRs) are often required to execute tasks in environments that are unstructured. Due to the uncertainty in the environment, an intelligent sensor that can enable autonomous navigation is well motivated. Given this motivation, researchers initially targeted the use of a variety of sonar and laser-based sensors. Some initial work also targeted the use of a fusion of various sensors to build a map of the environment for WMR navigation (see [17], [19], [28], [29], [31], and the references within). While this is still an active area of research, various shortcomings associated with these technologies, along with recent advances in image extraction/interpretation technology and advances in control theory, have motivated researchers to investigate the sole use of camera-based vision systems for autonomous navigation. For example, using consecutive image frames and an object database, the authors of [18] recently proposed a monocular visual servo tracking controller for WMRs based on a linearized system of equations and Extended Kalman Filtering (EKF) techniques. Also using EKF techniques on the linearized kinematic model, the authors of [7] used feedback from a monocular omnidirectional camera system (similar to [1]) to enable wall following, follow-the-leader, and position regulation tasks. In [16], Hager et al. used a monocular vision system mounted on a pan-tilt unit to generate image-Jacobian and geometry-based controllers by using different snapshots of the target and an epipolar constraint. As stated in [2], a drawback of the method developed in [16] is that the system equations became numerically ill-conditioned for large pan angles. Given this shortcoming, Burschka and Hager [2] used a spherical image projection of a monocular vision system that relied on teaching and replay phases to facilitate the estimation of the unknown object height parameter in the image-Jacobian by solving a least-squares problem. Spatiotemporal apparent velocities obtained from an optical flow of successive images of an object were used in [26] to estimate the depth and time-to-contact to develop a monocular vision-guided robot. A similar optical flow technique was also used in [20]. In [9], Dixon et al. used feedback from an uncalibrated, fixed (ceiling-mounted) camera to develop an adaptive tracking controller for a WMR that compensated for the parametric uncertainty in the camera and the WMR dynamics. An image-based visual servo controller that exploits an object model was proposed in [30] to solve the WMR tracking problem (the regulation problem was not solved due to restrictions on the reference trajectory); the controller adapted for the constant, unknown height of an object moving in a plane through Lyapunov-based techniques. In [21] and [33], visual servo controllers were recently developed for systems with similar underactuated kinematics as WMRs. Specifically, Mahony and Hamel [21] developed a semi-global asymptotic visual servoing result for unmanned aerial vehicles that tracked parallel coplanar linear visual features, while Zhang and Ostrowski [33] used a vision system to navigate a blimp.

In contrast to the previous image-based visual servo control approaches, novel homography-based visual servo control techniques have recently been developed in a series of papers by Malis and Chaumette (e.g., [3], [4], [22], [23], [24]). The homography-based approach exploits a combination of reconstructed Euclidean information and image-space information in the control design. The Euclidean information is reconstructed by decoupling the interaction between the translation and rotation components of a homography matrix. As stated in [24], some advantages of this methodology over the aforementioned approaches are that an accurate Euclidean model of the environment (or target image) is not required and potential singularities in the image-Jacobian are eliminated (i.e., the image-Jacobian for homography-based visual servo controllers is typically triangular).
Motivated by the advantages of the homography-based strategy, several researchers have recently developed various regulation controllers for robot manipulators (see [5], [6], [8], [11], and [13]). In [12], a homography-based visual servo control strategy was recently developed to asymptotically regulate the position/orientation of a WMR to a constant Euclidean position defined by a reference image, despite unknown depth information.

In this paper, a homography-based visual servo control strategy is used to force the Euclidean position/orientation of a camera mounted on a WMR (i.e., the camera-in-hand problem) to track a desired time-varying trajectory defined by a prerecorded sequence of images. By comparing the features of an object from a reference image to features of an object in the current image and the prerecorded sequence of images, projective geometric relationships are exploited to enable the reconstruction of the Euclidean coordinates of the target points with respect to the WMR coordinate frame. The tracking control objective is naturally defined in terms of the Euclidean space; however, the translation error is unmeasurable. That is, the Euclidean reconstruction is scaled by an unknown distance from the camera/WMR to the target, and while the scaled position is measurable through the homography, the unscaled position error is unmeasurable. To overcome this obstacle, a Lyapunov-based control strategy is employed that provides a framework for the construction of an adaptive update law to actively compensate for the unknown depth-related scaling constant. While similar techniques as in [12] are employed for the Euclidean reconstruction from the image data for the WMR system, new development is provided in this paper to develop a tracking controller. In contrast to visual servo methods that linearize the system equations to facilitate EKF methods, the Lyapunov-based control design in this paper is based on the full nonlinear kinematic model of the vision system and the mobile robot system.

II. Problem Formulation

As illustrated in Fig. 1, the origin of the orthogonal coordinate system F attached to the camera is coincident with the center of the WMR wheel axis (i.e., the camera is "in-hand"). As also illustrated in Fig. 1, the xy-axis of F defines the plane of motion, where the x-axis of F is perpendicular to the wheel axis and the y-axis is parallel to the wheel axis. The z-axis of F is perpendicular to the plane of motion and is located at the center of the wheel axis. The linear velocity of the WMR along the x-axis is denoted by $v_c(t) \in \mathbb{R}$, and the angular velocity $\omega_c(t) \in \mathbb{R}$ is about the z-axis (see Fig. 1). The desired trajectory is defined by the prerecorded time-varying trajectory of Fd that is assumed to be second-order differentiable. The desired trajectory is obtained from a prerecorded set of images of a stationary target viewed by the on-board camera as the WMR moves. For example, the desired WMR motion could be obtained as an operator drives the robot via a teach pendant, and the on-board camera captures and stores the sequence of images of the stationary target. A fixed orthogonal coordinate system, denoted by F*, represents a fixed (i.e., a single snapshot) reference position and orientation of the camera relative to the stationary target. Based on the definition of these coordinate frames, the goal of this paper is to develop a homography-based visual servo controller that will force F to track the position and orientation trajectory provided by Fd.

[Figure omitted: Fig. 1. Mobile robot coordinate systems (current frame F, desired frame Fd, and reference frame F*).]

A. Geometric Model

In this section, geometric relationships are developed between the coordinate systems F, Fd, and F* and a reference plane $\pi$ that is defined by three target points $O_i$ $\forall i = 1, 2, 3$ that are not collinear. The 3D Euclidean coordinates of $O_i$ expressed in terms of F, Fd, and F* as $\bar{m}_i(t)$, $\bar{m}_{di}(t)$, $\bar{m}_i^* \in \mathbb{R}^3$, respectively, are defined as follows (see Fig. 2)

$$\bar{m}_i(t) \triangleq \begin{bmatrix} x_i(t) & y_i(t) & z_i(t) \end{bmatrix}^T \qquad \bar{m}_{di}(t) \triangleq \begin{bmatrix} x_{di}(t) & y_{di}(t) & z_{di}(t) \end{bmatrix}^T \qquad \bar{m}_i^* \triangleq \begin{bmatrix} x_i^* & y_i^* & z_i^* \end{bmatrix}^T \tag{1}$$

under the standard assumption that the distances from the origin of the respective coordinate frames to the targets along the focal axis remain positive (i.e., $x_i(t), x_{di}(t), x_i^* \geq \epsilon > 0$, where $\epsilon$ is an arbitrarily small positive constant). The rotation from F* to F is denoted by $R(t) \in SO(3)$, and the translation from F to F* is denoted by $x_f(t) \in \mathbb{R}^3$, where $x_f(t)$ is expressed in F. Similarly, $R_d(t) \in SO(3)$ denotes the desired time-varying rotation from F* to Fd, and $x_{fd}(t) \in \mathbb{R}^3$ denotes the desired translation from Fd to F*, where $x_{fd}(t)$ is expressed in Fd. Since the motion of the WMR is constrained to a plane, $x_f(t)$ and $x_{fd}(t)$ are defined as follows

$$x_f(t) \triangleq \begin{bmatrix} x_{f1} & x_{f2} & 0 \end{bmatrix}^T \qquad x_{fd}(t) \triangleq \begin{bmatrix} x_{fd1} & x_{fd2} & 0 \end{bmatrix}^T. \tag{2}$$

From the geometry between the coordinate frames depicted in Fig. 2, $\bar{m}_i^*$ can be related to $\bar{m}_i(t)$ and $\bar{m}_{di}(t)$ as follows

$$\bar{m}_i = x_f + R\,\bar{m}_i^* \qquad \bar{m}_{di} = x_{fd} + R_d\,\bar{m}_i^*. \tag{3}$$
In (3), $R(t)$ and $R_d(t)$ are defined as follows

$$R \triangleq \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad R_d \triangleq \begin{bmatrix} \cos\theta_d & -\sin\theta_d & 0 \\ \sin\theta_d & \cos\theta_d & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$

where $\theta(t) \in \mathbb{R}$ denotes the right-handed rotation angle about $z_i(t)$ that aligns the rotation of F with F*, and $\theta_d(t) \in \mathbb{R}$ denotes the right-handed rotation angle about $z_{di}(t)$ that aligns the rotation of Fd with F*. From Fig. 1 and the definitions of $\theta(t)$ and $\theta_d(t)$, it is clear that

$$\dot{\theta} = \omega_c \qquad \dot{\theta}_d = \omega_{cd}. \tag{5}$$

The rotation angles are assumed to be confined to the following regions

$$-\pi < \theta(t) < \pi \qquad -\pi < \theta_d(t) < \pi. \tag{6}$$

From the geometry given in Fig. 2, the distance $d^* \in \mathbb{R}$ from F* to $\pi$ along the unit normal is given by

$$d^* = n^{*T}\bar{m}_i^* \tag{7}$$

where $n^* = \begin{bmatrix} n_x & n_y & n_z \end{bmatrix}^T \in \mathbb{R}^3$ denotes the constant unit normal to $\pi$. From (7), the relationships in (3) can be expressed as follows

$$\bar{m}_i = \left(R + \frac{x_f}{d^*}\,n^{*T}\right)\bar{m}_i^* \qquad \bar{m}_{di} = \left(R_d + \frac{x_{fd}}{d^*}\,n^{*T}\right)\bar{m}_i^*. \tag{8}$$

[Figure omitted: Fig. 2. Coordinate frame relationships.]
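To make the geometric model concrete, the following Python sketch verifies that the direct relation (3) and the plane-based rewriting (8) agree for a point on $\pi$. All numerical values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Synthetic check of the geometric model (3), (4), (7), (8); all values
# below are made-up example values.
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])   # rotation per (4)
x_f = np.array([0.8, -0.3, 0.0])        # planar translation per (2)

n_star = np.array([1.0, 0.0, 0.0])      # assumed unit normal to plane pi
m_bar_star = np.array([3.0, 0.7, -0.4]) # target point on pi, expressed in F*
d_star = n_star @ m_bar_star            # distance from F* to pi, per (7)

# (3) and (8) agree for points on the plane, since n*^T m̄* / d* = 1.
m_bar_direct = x_f + R @ m_bar_star
m_bar_homog = (R + np.outer(x_f / d_star, n_star)) @ m_bar_star
print(np.allclose(m_bar_direct, m_bar_homog))   # True
```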
B. Euclidean Reconstruction

The relationship given in (3) provides a means to quantify the translational and rotational error between F and F* and between Fd and F*. Since the positions of F, Fd, and F* cannot be directly measured, this section illustrates how the normalized Euclidean coordinates of the target points can be reconstructed by relating multiple images. Specifically, comparisons are made between an image acquired from the camera attached to F, the reference image, and the prerecorded sequence of images that define the trajectory of Fd. To facilitate the subsequent development, the normalized Euclidean coordinates of $O_i$ expressed in terms of F, Fd, and F* as $m_i(t)$, $m_{di}(t)$, $m_i^* \in \mathbb{R}^3$, respectively, are defined as follows

$$m_i \triangleq \begin{bmatrix} 1 & m_{iy} & m_{iz} \end{bmatrix}^T = \frac{\bar{m}_i}{x_i} \qquad m_{di} \triangleq \begin{bmatrix} 1 & m_{diy} & m_{diz} \end{bmatrix}^T = \frac{\bar{m}_{di}}{x_{di}} \qquad m_i^* \triangleq \begin{bmatrix} 1 & m_{iy}^* & m_{iz}^* \end{bmatrix}^T = \frac{\bar{m}_i^*}{x_i^*} \tag{9}$$

where $\bar{m}_i(t)$, $\bar{m}_{di}(t)$, and $\bar{m}_i^*$ are introduced in (1). In addition to having a Euclidean coordinate, each target point $O_i$ will also have a projected pixel coordinate denoted by $u_i(t), v_i(t) \in \mathbb{R}$ for F, $u_i^*, v_i^* \in \mathbb{R}$ for F*, and $u_{di}(t), v_{di}(t) \in \mathbb{R}$ for Fd, that are defined as elements of $p_i(t) \in \mathbb{R}^3$ (i.e., the actual time-varying image points), $p_{di}(t) \in \mathbb{R}^3$ (i.e., the desired image point trajectory), and $p_i^* \in \mathbb{R}^3$ (i.e., the constant reference image points), respectively, as follows

$$p_i \triangleq \begin{bmatrix} 1 & v_i & u_i \end{bmatrix}^T \qquad p_{di} \triangleq \begin{bmatrix} 1 & v_{di} & u_{di} \end{bmatrix}^T \qquad p_i^* \triangleq \begin{bmatrix} 1 & v_i^* & u_i^* \end{bmatrix}^T. \tag{10}$$

The normalized Euclidean coordinates of the target points are related to the image data through the following pinhole lens models

$$p_i = A\,m_i \qquad p_{di} = A\,m_{di} \qquad p_i^* = A\,m_i^* \tag{11}$$

where $A \in \mathbb{R}^{3\times 3}$ is a known, constant, and invertible intrinsic camera calibration matrix.
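Since $A$ is known and invertible, the pinhole relations in (11) can be inverted to recover normalized Euclidean coordinates from measured pixel coordinates. Below is a minimal sketch, assuming a hypothetical calibration matrix consistent with the $[1 \; v \; u]^T$ ordering of (10); real values would come from a camera calibration.

```python
import numpy as np

# Invert the pinhole model (11): m = A^{-1} p. The calibration matrix
# below is a hypothetical example matching the [1, v, u]^T ordering
# of (10): row 1 keeps the leading 1, rows 2-3 map m_y, m_z to v, u.
A = np.array([[1.0,     0.0,   0.0],
              [120.0, 800.0,   0.0],    # assumed: v = 120 + 800*m_y
              [160.0,   0.0, 800.0]])   # assumed: u = 160 + 800*m_z
A_inv = np.linalg.inv(A)

def normalized_coords(v, u):
    """Map a measured pixel coordinate (v, u) to m = [1, m_y, m_z]^T."""
    p = np.array([1.0, v, u])           # image point per (10)
    return A_inv @ p

print(normalized_coords(v=150.0, u=200.0))
```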
Given that $m_i(t)$, $m_{di}(t)$, and $m_i^*$ can be obtained from (11), the rotation and translation between the coordinate systems can now be related in terms of the normalized Euclidean coordinates as follows

$$m_i = \underbrace{\frac{x_i^*}{x_i}}_{\alpha_i}\,\underbrace{\left(R + x_h\,n^{*T}\right)}_{H}\,m_i^* \tag{12}$$

$$m_{di} = \underbrace{\frac{x_i^*}{x_{di}}}_{\alpha_{di}}\,\underbrace{\left(R_d + x_{hd}\,n^{*T}\right)}_{H_d}\,m_i^* \tag{13}$$

where $\alpha_i(t), \alpha_{di}(t) \in \mathbb{R}$ denote the depth ratios, $H(t), H_d(t) \in \mathbb{R}^{3\times 3}$ denote Euclidean homographies, and $x_h(t), x_{hd}(t) \in \mathbb{R}^3$ denote scaled translation vectors that are defined as follows

$$x_h \triangleq \begin{bmatrix} x_{h1} & x_{h2} & 0 \end{bmatrix}^T = \frac{x_f}{d^*} \qquad x_{hd} \triangleq \begin{bmatrix} x_{hd1} & x_{hd2} & 0 \end{bmatrix}^T = \frac{x_{fd}}{d^*}. \tag{14}$$

By using (4) and (14), the Euclidean homography in (12) can be rewritten as follows

$$H = \begin{bmatrix} \cos\theta + x_{h1}n_x & -\sin\theta + x_{h1}n_y & x_{h1}n_z \\ \sin\theta + x_{h2}n_x & \cos\theta + x_{h2}n_y & x_{h2}n_z \\ 0 & 0 & 1 \end{bmatrix}. \tag{15}$$
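If $n^*$ is available (e.g., after a decomposition in the manner of [15], [32]), the explicit structure of (15) suggests one way to read $\theta(t)$ and the scaled translation back out of $H(t)$. The sketch below is one such reading, not the method of the cited works; it assumes $n_z \neq 0$, and all numerical values are made up for illustration.

```python
import numpy as np

# Read theta and (x_h1, x_h2) out of the structure of (15), assuming
# n_z != 0. The example H is built from theta = 0.4,
# x_h = [0.2, -0.1, 0], and n* = [0.6, 0, 0.8] (illustrative values).
n_star = np.array([0.6, 0.0, 0.8])
H = np.array([[np.cos(0.4) + 0.2*0.6, -np.sin(0.4),  0.2*0.8],
              [np.sin(0.4) - 0.1*0.6,  np.cos(0.4), -0.1*0.8],
              [0.0,                    0.0,          1.0]])

# Third column of (15): H13 = x_h1 * n_z, H23 = x_h2 * n_z.
x_h1, x_h2 = H[0, 2] / n_star[2], H[1, 2] / n_star[2]
# First column of (15): strip the translation terms, then take atan2.
cos_t = H[0, 0] - x_h1 * n_star[0]
sin_t = H[1, 0] - x_h2 * n_star[0]
theta = np.arctan2(sin_t, cos_t)        # within the region (6)
print(theta, x_h1, x_h2)                # 0.4, 0.2, -0.1
```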
By examining the terms in (15), it is clear that $H(t)$ contains signals that are not directly measurable (e.g., $\theta(t)$, $x_h(t)$, and $n^*$). By expanding $H_{jk}(t)$ $\forall j = 1, 2$, $k = 1, 2, 3$, the following expressions can be obtained from (9), (12), and (15)

$$1 = \alpha_i\left(H_{11} + H_{12}m_{iy}^* + H_{13}m_{iz}^*\right) \tag{16}$$
$$m_{iy} = \alpha_i\left(H_{21} + H_{22}m_{iy}^* + H_{23}m_{iz}^*\right) \tag{17}$$
$$m_{iz} = \alpha_i m_{iz}^*. \tag{18}$$

From (16)-(18), it is clear that three independent equations with nine unknowns (i.e., $H_{jk}(t)$ $\forall j = 1, 2$, $k = 1, 2, 3$ and $\alpha_i(t)$ $\forall i = 1, 2, 3$) can be generated for each target point. Hence, by determining the normalized Euclidean coordinates of three target points in F and F* from the image data and (11), the unknown elements of $H(t)$ and the unknown ratios $\alpha_i(t)$ can be determined. Likewise, for the same three target points in Fd and F*, the unknown elements of $H_d(t)$ and the unknown ratios $\alpha_{di}(t)$ can be determined. Once the elements of $H(t)$ and $H_d(t)$ are determined, various techniques (e.g., see [15], [32]) can be used to decompose the Euclidean homographies to obtain the rotation and translation components. Hence, $R(t)$, $R_d(t)$, $x_h(t)$, and $x_{hd}(t)$ are all known signals that can be used for the subsequent control synthesis. Since $R(t)$ and $R_d(t)$ are known matrices, (4) can be used to determine $\theta(t)$ and $\theta_d(t)$.
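One computational reading of (16)-(18), under the added assumption that each $m_{iz}^* \neq 0$: (18) yields each depth ratio directly, after which (16) and (17) reduce to two 3×3 linear systems for the first two rows of $H$ (the third row is fixed by (15)). The sketch below implements that reading; it is not the decomposition machinery of [15], [32].

```python
import numpy as np

def estimate_H_and_alpha(m_list, m_star_list):
    """Recover alpha_i and the unknown rows of H from (16)-(18).

    m_list, m_star_list: normalized coordinates [1, m_y, m_z] of the
    three noncollinear target points in F and F*. Assumes each
    m*_{iz} != 0 so that (18) gives alpha_i directly.
    """
    m = np.asarray(m_list)            # shape (3, 3)
    m_s = np.asarray(m_star_list)     # shape (3, 3)

    alpha = m[:, 2] / m_s[:, 2]       # depth ratios, from (18)

    # Row i of M is [1, m*_{iy}, m*_{iz}]; (16)-(17) become
    # M @ [H11,H12,H13]^T = 1/alpha and M @ [H21,H22,H23]^T = m_y/alpha.
    M = np.column_stack([np.ones(3), m_s[:, 1], m_s[:, 2]])
    row1 = np.linalg.solve(M, 1.0 / alpha)
    row2 = np.linalg.solve(M, m[:, 1] / alpha)

    H = np.vstack([row1, row2, [0.0, 0.0, 1.0]])  # third row fixed by (15)
    return H, alpha
```

For conventional $(u, v, 1)$ pixel coordinates, OpenCV's cv2.findHomography and cv2.decomposeHomographyMat provide comparable estimation and decomposition machinery, although the coordinate ordering in (10) differs from OpenCV's convention.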
Remark 1: To develop a tracking controller, it is typical that the desired trajectory is used as a feedforward component in the control design. Hence, for a kinematic controller the desired trajectory is required to be at least first-order differentiable, and at least second-order differentiable for a dynamic-level controller. From the Euclidean homography introduced in (13), $m_{di}(t)$ can be expressed in terms of the a priori known functions $\alpha_{di}(t)$, $H_d(t)$, $R_d(t)$, and $x_{hd}(t)$. Since these signals can be obtained from the prerecorded sequence of images, sufficiently smooth functions can be generated for these signals by fitting a sufficiently smooth spline function to the signals. Hence, in practice, the a priori developed smooth functions $\alpha_{di}(t)$, $R_d(t)$, and $x_{hd}(t)$ can be constructed as bounded functions with sufficiently bounded time derivatives. Given $\theta_d(t)$ and the time derivative of $R_d(t)$, $\dot{\theta}_d(t)$ can be determined. In the subsequent tracking control development, $\dot{x}_{hd1}(t)$ and $\dot{\theta}_d(t)$ will be used as feedforward control terms.
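As one possible realization of the spline fitting suggested in Remark 1, discrete samples of, e.g., $x_{hd1}$ recovered from the prerecorded image sequence could be interpolated with a cubic spline whose derivatives then supply the feedforward terms. A sketch with SciPy (the sample values are illustrative, not data from the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical samples of x_hd1 recovered from the prerecorded image
# sequence at the camera frame times (illustrative values only).
t_samples = np.linspace(0.0, 10.0, 51)
xhd1_samples = 0.5 * np.sin(0.4 * t_samples) + 0.1 * t_samples

# Fit a twice-differentiable spline so that the feedforward term
# x_hd1_dot (and x_hd1_ddot, needed for the Barbalat argument in
# Section IV) are smooth, bounded functions of time.
xhd1 = CubicSpline(t_samples, xhd1_samples)
xhd1_dot = xhd1.derivative(1)
xhd1_ddot = xhd1.derivative(2)

t = 3.7
print(xhd1(t), xhd1_dot(t), xhd1_ddot(t))
```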
III. Control Development

The control objective is to ensure that the coordinate frame F tracks the time-varying trajectory of Fd (i.e., $\bar{m}_i(t)$ tracks $\bar{m}_{di}(t)$). This objective is naturally defined in terms of the Euclidean position/orientation of the WMR. Specifically, based on the previous development, the translation and rotation tracking error, denoted by $e(t) \triangleq \begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix}^T \in \mathbb{R}^3$, is defined as follows

$$e_1 \triangleq x_{h1} - x_{hd1} \qquad e_2 \triangleq x_{h2} - x_{hd2} \qquad e_3 \triangleq \theta - \theta_d \tag{19}$$

where $x_{h1}(t)$, $x_{h2}(t)$, $x_{hd1}(t)$, and $x_{hd2}(t)$ are introduced in (14), and $\theta(t)$ and $\theta_d(t)$ are introduced in (4). Based on the definition in (19), it can be shown that the control objective is achieved if the tracking error $e(t) \to 0$. Specifically, it is clear from (14) that if $e_1(t) \to 0$ and $e_2(t) \to 0$, then $x_f(t) \to x_{fd}(t)$. If $e_3 \to 0$, then it is clear from (4) and (19) that $R(t) \to R_d(t)$. If $x_f(t) \to x_{fd}(t)$ and $R(t) \to R_d(t)$, then (3) can be used to prove that $\bar{m}_i(t) \to \bar{m}_{di}(t)$.

A. Open-loop Error System

As a means to develop the open-loop tracking error system, the time derivative of the Euclidean position $x_f(t)$ is determined as follows [24]

$$\dot{x}_f = -v + [x_f]_\times\,\omega \tag{20}$$

where $v(t), \omega(t) \in \mathbb{R}^3$ denote the respective linear and angular velocity of the WMR expressed in F as

$$v \triangleq \begin{bmatrix} v_c & 0 & 0 \end{bmatrix}^T \qquad \omega \triangleq \begin{bmatrix} 0 & 0 & \omega_c \end{bmatrix}^T \tag{21}$$

and $[x_f]_\times$ denotes the 3×3 skew-symmetric form of $x_f(t)$. After substituting (14) into (20), the time derivative of the translation vector $x_h(t)$ can be written in terms of the linear and angular velocity of the WMR as follows

$$\dot{x}_h = -\frac{v}{d^*} + [x_h]_\times\,\omega. \tag{22}$$

After incorporating (21) into (22), the following expressions can be obtained

$$\dot{x}_{h1} = -\frac{v_c}{d^*} + x_{h2}\,\omega_c \qquad \dot{x}_{h2} = -x_{h1}\,\omega_c \tag{23}$$

where (14) was utilized. Given that the desired trajectory is generated from a prerecorded set of images taken by the on-board camera as the WMR was moving, a similar expression as (22) can be developed as follows

$$\dot{x}_{fd} = -\begin{bmatrix} v_{cd} & 0 & 0 \end{bmatrix}^T + [x_{fd}]_\times \begin{bmatrix} 0 & 0 & \omega_{cd} \end{bmatrix}^T \tag{24}$$

where $v_{cd}(t), \omega_{cd}(t) \in \mathbb{R}$ denote the respective desired linear and angular velocity of the WMR expressed in Fd (note that $v_{cd}(t)$ is not measurable). After substituting (14) into (24), the time derivative of the translation vector $x_{hd}(t)$ can be written as follows

$$\dot{x}_{hd1} = -\frac{v_{cd}}{d^*} + x_{hd2}\,\omega_{cd} \qquad \dot{x}_{hd2} = -x_{hd1}\,\omega_{cd} \tag{25}$$

where (14) was utilized. After taking the time derivative of (19) and utilizing (5) and (23), the following open-loop error system can be obtained

$$d^*\dot{e}_1 = -v_c + d^*\left(x_{h2}\,\omega_c - \dot{x}_{hd1}\right) \qquad \dot{e}_2 = -x_{h1}\,\omega_c + x_{hd1}\,\dot{\theta}_d \qquad \dot{e}_3 = \omega_c - \dot{\theta}_d \tag{26}$$

where the definition of $e_2(t)$ given in (19) and the second equation of (25) were utilized.
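The algebra behind (26) can also be checked symbolically. For instance, the following SymPy sketch reproduces the second equation of (26) from (19), (23), and (25), after identifying $\dot{\theta}_d = \omega_{cd}$ via (5):

```python
import sympy as sp

t, d_star = sp.symbols('t d_star', positive=True)
v_c, w_c = sp.Function('v_c')(t), sp.Function('w_c')(t)
w_cd = sp.Function('w_cd')(t)
x_h1, x_h2 = sp.Function('x_h1')(t), sp.Function('x_h2')(t)
x_hd1, x_hd2 = sp.Function('x_hd1')(t), sp.Function('x_hd2')(t)

# Kinematics (23) and (25), encoded as substitution rules for derivatives.
rules = {
    sp.Derivative(x_h2, t): -x_h1 * w_c,
    sp.Derivative(x_hd2, t): -x_hd1 * w_cd,
}

e2 = x_h2 - x_hd2                     # translation error, from (19)
e2_dot = sp.diff(e2, t).subs(rules)
# Prints -x_h1(t)*w_c(t) + x_hd1(t)*w_cd(t), matching e2_dot in (26).
print(sp.simplify(e2_dot))
```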
To facilitate the subsequent development, the auxiliary variable $\bar{e}_2(t) \in \mathbb{R}$ is defined as

$$\bar{e}_2 \triangleq e_2 + x_{hd1}\,e_3. \tag{27}$$

After taking the time derivative of (27) and utilizing (26), the following expression is obtained

$$\dot{\bar{e}}_2 = -e_1\,\omega_c + \dot{x}_{hd1}\,e_3. \tag{28}$$

Based on (27), it is clear that if $\bar{e}_2(t), e_3(t) \to 0$, then $e_2(t) \to 0$. Based on this observation and the open-loop dynamics given in (28), the following control development is based on the desire to prove that $e_1(t)$, $\bar{e}_2(t)$, $e_3(t)$ are asymptotically driven to zero.

B. Closed-Loop Error System

Based on the open-loop error systems in (26) and (28), the linear and angular velocity kinematic control inputs for the WMR are designed as follows

$$v_c \triangleq k_v e_1 - \bar{e}_2\,\omega_c + \hat{d}\left(x_{h2}\,\omega_c - \dot{x}_{hd1}\right) \tag{29}$$

$$\omega_c \triangleq -k_\omega e_3 + \dot{\theta}_d - \dot{x}_{hd1}\,\bar{e}_2 \tag{30}$$

where $k_v, k_\omega \in \mathbb{R}$ denote positive, constant control gains. In (29), the parameter estimate $\hat{d}(t) \in \mathbb{R}$ is generated by the following update law

$$\dot{\hat{d}} \triangleq \gamma_1 e_1\left(x_{h2}\,\omega_c - \dot{x}_{hd1}\right) \tag{31}$$

where $\gamma_1 \in \mathbb{R}$ is a positive, constant adaptation gain. After substituting the kinematic control signals designed in (29) and (30) into (26), the following closed-loop error systems are obtained

$$d^*\dot{e}_1 = -k_v e_1 + \bar{e}_2\,\omega_c + \tilde{d}\left(x_{h2}\,\omega_c - \dot{x}_{hd1}\right) \qquad \dot{\bar{e}}_2 = -e_1\,\omega_c + \dot{x}_{hd1}\,e_3 \qquad \dot{e}_3 = -k_\omega e_3 - \dot{x}_{hd1}\,\bar{e}_2 \tag{32}$$

where (28) was utilized and the depth-related parameter estimation error $\tilde{d}(t) \in \mathbb{R}$ is defined as follows

$$\tilde{d} \triangleq d^* - \hat{d}. \tag{33}$$
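A direct transcription of the control law (29)-(30) and update law (31), e.g., for simulation purposes, is given below. The gain values and the explicit Euler integration of $\hat{d}$ are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of the kinematic controller (29)-(30) with adaptive update (31).
k_v, k_omega, gamma1 = 2.0, 2.0, 5.0    # illustrative positive gains

def controller(e, e2_bar, x_h2, xhd1_dot, thetad_dot, d_hat, dt):
    """Return (v_c, omega_c, updated d_hat) per (29)-(31)."""
    e1, _, e3 = e
    # Angular velocity input (30); it feeds into (29).
    omega_c = -k_omega * e3 + thetad_dot - xhd1_dot * e2_bar
    # Linear velocity input (29), using the depth estimate d_hat.
    v_c = k_v * e1 - e2_bar * omega_c + d_hat * (x_h2 * omega_c - xhd1_dot)
    # Adaptive update law (31), integrated with a simple Euler step.
    d_hat = d_hat + dt * gamma1 * e1 * (x_h2 * omega_c - xhd1_dot)
    return v_c, omega_c, d_hat
```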
IV. Stability Analysis

Theorem 1: The adaptive update law defined in (31) along with the control inputs designed in (29) and (30) ensure that the WMR tracking error $e(t)$ is asymptotically driven to zero in the sense that

$$\lim_{t\to\infty} e(t) = 0 \tag{34}$$

provided the time derivative of the desired trajectory satisfies the following condition

$$\lim_{t\to\infty} \dot{x}_{hd1} \neq 0. \tag{35}$$

Proof: To prove Theorem 1, the non-negative function $V(t) \in \mathbb{R}$ is defined as follows

$$V \triangleq \frac{1}{2}d^*e_1^2 + \frac{1}{2}\bar{e}_2^2 + \frac{1}{2}e_3^2 + \frac{1}{2\gamma_1}\tilde{d}^2. \tag{36}$$

The following simplified expression can be obtained by taking the time derivative of (36), substituting the closed-loop dynamics in (32) into the resulting expression, and then cancelling common terms

$$\dot{V} = -k_v e_1^2 + e_1\tilde{d}\left(x_{h2}\,\omega_c - \dot{x}_{hd1}\right) - k_\omega e_3^2 - \frac{1}{\gamma_1}\tilde{d}\,\dot{\hat{d}}. \tag{37}$$

After substituting (31) into (37), the following expression can be obtained

$$\dot{V} = -k_v e_1^2 - k_\omega e_3^2. \tag{38}$$

From (36) and (38), it is clear that $e_1(t)$, $\bar{e}_2(t)$, $e_3(t)$, $\tilde{d}(t) \in \mathcal{L}_\infty$ and that $e_1(t), e_3(t) \in \mathcal{L}_2$. Since $\tilde{d}(t) \in \mathcal{L}_\infty$ and $d^*$ is a constant, the expression in (33) can be used to determine that $\hat{d}(t) \in \mathcal{L}_\infty$. From the assumption that $x_{hd1}(t)$, $\dot{x}_{hd1}(t)$, $x_{hd2}(t)$, $\theta_d(t)$, and $\dot{\theta}_d(t)$ are constructed as bounded functions, and the fact that $\bar{e}_2(t), e_3(t) \in \mathcal{L}_\infty$, the expressions in (19), (27), and (30) can be used to prove that $e_2(t)$, $x_{h1}(t)$, $x_{h2}(t)$, $\theta(t)$, $\omega_c(t) \in \mathcal{L}_\infty$. Based on the previous development, the expressions in (29), (31), and (32) can be used to conclude that $v_c(t)$, $\dot{\hat{d}}(t)$, $\dot{e}_1(t)$, $\dot{\bar{e}}_2(t)$, $\dot{e}_3(t) \in \mathcal{L}_\infty$. Based on the fact that $e_1(t)$, $e_3(t)$, $\dot{e}_1(t)$, $\dot{e}_3(t) \in \mathcal{L}_\infty$ and that $e_1(t), e_3(t) \in \mathcal{L}_2$, Barbalat's lemma [25] can be employed to prove that

$$\lim_{t\to\infty} e_1(t), e_3(t) = 0. \tag{39}$$

From (39) and the fact that the signal $\dot{x}_{hd1}(t)\bar{e}_2(t)$ is uniformly continuous (i.e., $\dot{x}_{hd1}(t)$, $\ddot{x}_{hd1}(t)$, $\bar{e}_2(t)$, $\dot{\bar{e}}_2(t) \in \mathcal{L}_\infty$), the Extended Barbalat's Lemma [10] can be applied to the last equation in (32) to prove that

$$\lim_{t\to\infty} \dot{e}_3(t) = 0 \tag{40}$$

and that

$$\lim_{t\to\infty} \dot{x}_{hd1}(t)\,\bar{e}_2(t) = 0. \tag{41}$$

If the desired trajectory satisfies (35), then (41) can be used to prove that

$$\lim_{t\to\infty} \bar{e}_2(t) = 0. \tag{42}$$

Based on the definition of $\bar{e}_2(t)$ given in (27), the results in (39) and (42) can be used to conclude that

$$\lim_{t\to\infty} e_2(t) = 0 \tag{43}$$

provided the condition in (35) is satisfied. □

Remark 2: The condition given in (35) is in terms of the time derivative of the desired translation vector. Typically, for WMR tracking problems, this assumption is expressed in terms of the desired linear and angular velocity of the WMR. To this end, (25) can be substituted into (35) to obtain the following condition

$$\lim_{t\to\infty} \frac{v_{cd}(t)}{d^*} \neq \lim_{t\to\infty} x_{hd2}(t)\,\omega_{cd}(t). \tag{44}$$

The condition in (44) is comparable to typical WMR tracking results that restrict the desired linear and angular velocity. For an in-depth discussion of this type of restriction, including related previous results, see [10].
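To numerically sanity-check Theorem 1, one can close the loop: propagate the desired signals via (25) and (5), the actual signals via (23) and (5), and apply (29)-(31). In the sketch below, all numerical values (the true depth $d^*$, gains, initial conditions, and desired velocities) are illustrative assumptions; the desired velocities are chosen so that $\dot{x}_{hd1}$ does not vanish, consistent with (35)/(44).

```python
import numpy as np

# Illustrative closed-loop simulation of (23), (25), (29)-(31).
d_star = 2.0                      # true depth (unknown to the controller)
k_v = k_w = 2.0; gamma1 = 5.0; dt = 1e-3

# States: actual (x_h1, x_h2, theta), desired (x_hd1, x_hd2, theta_d), d_hat.
xh = np.array([0.5, -0.4]); th = 0.6
xhd = np.array([0.0, 0.0]); thd = 0.0
d_hat = 0.5                       # initial depth estimate

for k in range(int(60.0 / dt)):
    v_cd, w_cd = 0.3, 0.1         # desired velocities; x_hd1_dot persists
    xhd1_dot = -v_cd / d_star + xhd[1] * w_cd       # from (25)
    e1, e2 = xh - xhd
    e3 = th - thd
    e2b = e2 + xhd[0] * e3                          # auxiliary variable (27)
    w_c = -k_w * e3 + w_cd - xhd1_dot * e2b         # control (30)
    v_c = k_v * e1 - e2b * w_c + d_hat * (xh[1] * w_c - xhd1_dot)  # (29)
    d_hat += dt * gamma1 * e1 * (xh[1] * w_c - xhd1_dot)           # (31)
    # Euler-integrate the kinematics (23), (25), and (5).
    xh += dt * np.array([-v_c / d_star + xh[1] * w_c, -xh[0] * w_c])
    th += dt * w_c
    xhd += dt * np.array([xhd1_dot, -xhd[0] * w_cd])
    thd += dt * w_cd

print(e1, e2, e3)   # tracking errors should be near zero
```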
V. Conclusions

In this paper, the position/orientation of a WMR is forced to track a desired time-varying trajectory defined by a prerecorded sequence of images. To achieve the result, multiple views of three target points were used to develop Euclidean homographies. By decomposing the Euclidean homographies into separate translation and rotation components, reconstructed Euclidean information was obtained for the control development. A Lyapunov-based stability argument was used to design an adaptive update law to compensate for the fact that the reconstructed translation signal was scaled by an unknown depth parameter. The impact of the development in this paper is that a new analytical approach has been developed using homography-based concepts to enable the position/orientation of a WMR subject to nonholonomic constraints to track a desired trajectory generated from a sequence of images, despite the lack of depth measurements. Our future efforts will target experimental demonstration of the developed controller and the development of analytical Lyapunov-based methods for WMR visual servo tracking using an off-board camera, similar to the problem in [9], without the restriction that the camera be fixed perpendicular to the WMR plane of motion.
References

[1] S. Baker and S. Nayar, "A Theory of Catadioptric Image Formation," Proc. of the ICCV, Jan. 1998, pp. 35-42.
[2] D. Burschka and G. Hager, "Vision-Based Control of Mobile Robots," Proc. of the IEEE International Conference on Robotics and Automation, 2001, pp. 1707-1713.
[3] F. Chaumette and E. Malis, "2 1/2 D Visual Servoing: A Possible Solution to Improve Image-based and Position-based Visual Servoings," Proc. of the IEEE International Conference on Robotics and Automation, 2000, pp. 630-635.
[4] F. Chaumette, E. Malis, and S. Boudet, "2D 1/2 Visual Servoing with Respect to a Planar Object," Proc. of the Workshop on New Trends in Image-Based Robot Servoing, 1997, pp. 45-52.
[5] J. Chen, A. Behal, D. Dawson, and Y. Fang, "2.5D Visual Servoing with a Fixed Camera," Proc. of the IEEE American Control Conf., to appear.
[6] P. I. Corke and S. A. Hutchinson, "A New Hybrid Image-Based Visual Servo Control Scheme," Proc. of the IEEE Conference on Decision and Control, 2000, pp. 2521-2527.
[7] A. K. Das, et al., "Real-Time Vision-Based Control of a Nonholonomic Mobile Robot," Proc. of the IEEE International Conference on Robotics and Automation, 2001, pp. 1714-1719.
[8] K. Deguchi, "Optimal Motion Control for Image-Based Visual Servoing by Decoupling Translation and Rotation," Proc. of the Intl. Conf. on Intelligent Robots and Systems, 1998, pp. 705-711.
[9] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, "Adaptive Tracking Control of a Wheeled Mobile Robot via an Uncalibrated Camera System," IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 31(3), 2001, pp. 341-352.
[10] W. E. Dixon, D. M. Dawson, E. Zergeroglu, and A. Behal, Nonlinear Control of Wheeled Mobile Robots, Springer-Verlag London Limited, 2001.
[11] Y. Fang, A. Behal, W. E. Dixon, and D. M. Dawson, "Adaptive 2.5D Visual Servoing of Kinematically Redundant Robot Manipulators," Conference on Decision and Control, Las Vegas, NV, Dec. 2002, pp. 2860-2865.
[12] Y. Fang, D. M. Dawson, W. E. Dixon, and M. S. de Queiroz, "2.5D Visual Servoing of Wheeled Mobile Robots," Conference on Decision and Control, Las Vegas, NV, Dec. 2002, pp. 2866-2871.
[13] Y. Fang, W. E. Dixon, D. M. Dawson, and J. Chen, "Robust 2.5D Visual Servoing for Robot Manipulators," Proc. of the 2003 IEEE American Control Conf., Denver, CO, June 2003, pp. 3311-3316.
[14] O. Faugeras, Three-Dimensional Computer Vision, The MIT Press, Cambridge, Massachusetts, 2001.
[15] O. Faugeras and F. Lustman, "Motion and Structure From Motion in a Piecewise Planar Environment," International Journal of Pattern Recognition and Artificial Intelligence, 2(3), 1988, pp. 485-508.
[16] G. D. Hager, D. J. Kriegman, A. S. Georghiades, and O. Ben-Shahar, "Toward Domain-Independent Navigation: Dynamic Vision and Control," Proc. of the IEEE Conference on Decision and Control, 1998, pp. 3257-3262.
[17] M. Hebert, "3-D Vision for Outdoor Navigation by an Autonomous Vehicle," Proc. of the Image Understanding Workshop, Cambridge, UK, 1998.
[18] B. H. Kim, et al., "Localization of a Mobile Robot using Images of a Moving Target," Proc. of the IEEE International Conference on Robotics and Automation, 2001, pp. 253-258.
[19] D. J. Kriegman, E. Triendl, and T. O. Binford, "Stereo Vision and Navigation in Buildings for Mobile Robots," IEEE Transactions on Robotics and Automation, 5(6), Dec. 1989, pp. 792-803.
[20] Y. Ma, J. Kosecka, and S. Sastry, "Vision Guided Navigation for a Nonholonomic Mobile Robot," IEEE Trans. on Robotics and Automation, 15(3), June 1999, pp. 521-536.
[21] R. Mahony and T. Hamel, "Visual Servoing Using Linear Features for Under-actuated Rigid Body Dynamics," Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2001, pp. 1153-1158.
[22] E. Malis and F. Chaumette, "Theoretical Improvements in the Stability Analysis of a New Class of Model-Free Visual Servoing Methods," IEEE Transactions on Robotics and Automation, 18(2), April 2002, pp. 176-186.
[23] E. Malis and F. Chaumette, "2 1/2 D Visual Servoing with Respect to Unknown Objects Through a New Estimation Scheme of Camera Displacement," International Journal of Computer Vision, 37(1), June 2000, pp. 79-97.
[24] E. Malis, F. Chaumette, and S. Boudet, "2 1/2 D Visual Servoing," IEEE Transactions on Robotics and Automation, 15(2), April 1999, pp. 238-250.
[25] J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Inc.: Englewood Cliffs, NJ, 1991.
[26] K.-T. Song and J.-H. Huang, "Fast Optical Flow Estimation and Its Application to Real-time Obstacle Avoidance," Proc. of the IEEE International Conference on Robotics and Automation, 2001, pp. 2891-2896.
[27] M. W. Spong and M. Vidyasagar, Robot Dynamics and Control, John Wiley and Sons, Inc.: New York, NY, 1989.
[28] C. E. Thorpe, M. Hebert, T. Kanade, and S. Shafer, "Vision and Navigation for the Carnegie-Mellon Navlab," IEEE Trans. Pattern Anal. Machine Intell., 10(3), May 1988, pp. 362-373.
[29] M. A. Turk, D. G. Morgenthaler, K. D. Gremban, and M. Marra, "VITS - A Vision System for Autonomous Land Vehicle Navigation," IEEE Trans. Pattern Anal. Machine Intell., 10(3), May 1988, pp. 342-361.
[30] H. Y. Wang, S. Itani, T. Fukao, and N. Adachi, "Image-Based Visual Adaptive Tracking Control of Nonholonomic Mobile Robots," Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2001, pp. 1-6.
[31] A. M. Waxman, et al., "A Visual Navigation System for Autonomous Land Vehicles," IEEE Journal of Robotics and Automation, RA-3(2), 1987, pp. 124-141.
[32] Z. Zhang and A. R. Hanson, "Scaled Euclidean 3D Reconstruction Based on Externally Uncalibrated Cameras," IEEE Symp. on Computer Vision, 1995, pp. 37-42.
[33] H. Zhang and J. P. Ostrowski, "Visual Servoing with Dynamics: Control of an Unmanned Blimp," Proc. of the IEEE International Conference on Robotics and Automation, 1999, pp. 618-623.
